Test Report: Hyper-V_Windows 17857

                    
6e3ba89264b64b7b6259573ef051dd85e83461cf:2023-12-27:32448

Failed tests (13/202)

TestAddons/parallel/Registry (74s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 54.5785ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-s47qm" [06b83650-42ec-4407-8ecc-fcbf792cbcbd] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0056764s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pxzvt" [a526c789-a0bd-4352-a75b-d93d5eaca5f8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0219247s
addons_test.go:340: (dbg) Run:  kubectl --context addons-839600 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-839600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-839600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.9480955s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 ip: (2.769157s)
addons_test.go:364: expected stderr to be -empty- but got: *"W1226 21:53:52.507650    8956 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-839600 ip"
2023/12/26 21:53:55 [DEBUG] GET http://172.21.177.30:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable registry --alsologtostderr -v=1: (15.9251961s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-839600 -n addons-839600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-839600 -n addons-839600: (13.1599007s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 logs -n 25: (10.0542548s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |                     |
	|         | -p download-only-253200                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |                     |
	|         | -p download-only-253200                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |                     |
	|         | -p download-only-253200                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC | 26 Dec 23 21:46 UTC |
	| delete  | -p download-only-253200                                                                     | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC | 26 Dec 23 21:46 UTC |
	| delete  | -p download-only-253200                                                                     | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC | 26 Dec 23 21:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-481600 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |                     |
	|         | binary-mirror-481600                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:60239                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-481600                                                                     | binary-mirror-481600 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:47 UTC |                     |
	|         | addons-839600                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:47 UTC |                     |
	|         | addons-839600                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-839600 --wait=true                                                                | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:47 UTC | 26 Dec 23 21:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:53 UTC | 26 Dec 23 21:54 UTC |
	|         | addons-839600                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-839600 addons                                                                        | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:53 UTC | 26 Dec 23 21:53 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-839600 ip                                                                            | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:53 UTC | 26 Dec 23 21:53 UTC |
	| addons  | addons-839600 addons disable                                                                | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:53 UTC | 26 Dec 23 21:54 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:54 UTC |                     |
	|         | -p addons-839600                                                                            |                      |                   |         |                     |                     |
	| ssh     | addons-839600 ssh cat                                                                       | addons-839600        | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:54 UTC |                     |
	|         | /opt/local-path-provisioner/pvc-861b55a7-d7ac-4486-8979-8c51e4270cae_default_test-pvc/file1 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:47:01
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:47:01.814547   10896 out.go:296] Setting OutFile to fd 804 ...
	I1226 21:47:01.815536   10896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:47:01.815536   10896 out.go:309] Setting ErrFile to fd 808...
	I1226 21:47:01.815656   10896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:47:01.835817   10896 out.go:303] Setting JSON to false
	I1226 21:47:01.844787   10896 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1620,"bootTime":1703625601,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 21:47:01.844928   10896 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 21:47:01.852763   10896 out.go:177] * [addons-839600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 21:47:01.855961   10896 notify.go:220] Checking for updates...
	I1226 21:47:01.859446   10896 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 21:47:01.862055   10896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:47:01.864709   10896 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 21:47:01.867284   10896 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:47:01.869749   10896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:47:01.873086   10896 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:47:07.424482   10896 out.go:177] * Using the hyperv driver based on user configuration
	I1226 21:47:07.428451   10896 start.go:298] selected driver: hyperv
	I1226 21:47:07.428451   10896 start.go:902] validating driver "hyperv" against <nil>
	I1226 21:47:07.428451   10896 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:47:07.477628   10896 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:47:07.478935   10896 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 21:47:07.479089   10896 cni.go:84] Creating CNI manager for ""
	I1226 21:47:07.479089   10896 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 21:47:07.479089   10896 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1226 21:47:07.479089   10896 start_flags.go:323] config:
	{Name:addons-839600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-839600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:47:07.479089   10896 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:47:07.484199   10896 out.go:177] * Starting control plane node addons-839600 in cluster addons-839600
	I1226 21:47:07.487226   10896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 21:47:07.487226   10896 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 21:47:07.487226   10896 cache.go:56] Caching tarball of preloaded images
	I1226 21:47:07.487592   10896 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 21:47:07.487592   10896 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 21:47:07.488218   10896 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\config.json ...
	I1226 21:47:07.488218   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\config.json: {Name:mk351bb6f528e8f35e09f610a03d6160f0cd7681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:47:07.489067   10896 start.go:365] acquiring machines lock for addons-839600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 21:47:07.490062   10896 start.go:369] acquired machines lock for "addons-839600" in 0s
	I1226 21:47:07.490062   10896 start.go:93] Provisioning new machine with config: &{Name:addons-839600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:addons-839600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 21:47:07.490062   10896 start.go:125] createHost starting for "" (driver="hyperv")
	I1226 21:47:07.494043   10896 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1226 21:47:07.494298   10896 start.go:159] libmachine.API.Create for "addons-839600" (driver="hyperv")
	I1226 21:47:07.494298   10896 client.go:168] LocalClient.Create starting
	I1226 21:47:07.494673   10896 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1226 21:47:08.093547   10896 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1226 21:47:08.245295   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1226 21:47:10.401020   10896 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1226 21:47:10.401020   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:10.401106   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1226 21:47:12.201278   10896 main.go:141] libmachine: [stdout =====>] : False
	
	I1226 21:47:12.201278   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:12.201395   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1226 21:47:13.716703   10896 main.go:141] libmachine: [stdout =====>] : True
	
	I1226 21:47:13.716779   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:13.716832   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1226 21:47:17.510156   10896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1226 21:47:17.510382   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:17.513321   10896 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1226 21:47:17.993817   10896 main.go:141] libmachine: Creating SSH key...
	I1226 21:47:18.245236   10896 main.go:141] libmachine: Creating VM...
	I1226 21:47:18.245236   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1226 21:47:21.119089   10896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1226 21:47:21.119130   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:21.119130   10896 main.go:141] libmachine: Using switch "Default Switch"
	I1226 21:47:21.119130   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1226 21:47:22.936475   10896 main.go:141] libmachine: [stdout =====>] : True
	
	I1226 21:47:22.937207   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:22.937207   10896 main.go:141] libmachine: Creating VHD
	I1226 21:47:22.937207   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1226 21:47:26.689746   10896 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5DDD7002-FDC9-4469-B4F6-30F4D68BAA0B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1226 21:47:26.689746   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:26.689746   10896 main.go:141] libmachine: Writing magic tar header
	I1226 21:47:26.689746   10896 main.go:141] libmachine: Writing SSH key tar header
	I1226 21:47:26.699424   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1226 21:47:29.885946   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:29.886197   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:29.886284   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\disk.vhd' -SizeBytes 20000MB
	I1226 21:47:32.401413   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:32.401413   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:32.401413   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-839600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I1226 21:47:36.718009   10896 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-839600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1226 21:47:36.718009   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:36.718119   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-839600 -DynamicMemoryEnabled $false
	I1226 21:47:38.914831   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:38.915068   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:38.915068   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-839600 -Count 2
	I1226 21:47:41.082156   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:41.082156   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:41.082248   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-839600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\boot2docker.iso'
	I1226 21:47:43.690309   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:43.690473   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:43.690473   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-839600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\disk.vhd'
	I1226 21:47:46.331351   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:46.331351   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:46.331351   10896 main.go:141] libmachine: Starting VM...
	I1226 21:47:46.331351   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-839600
	I1226 21:47:49.583389   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:49.583389   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:49.583389   10896 main.go:141] libmachine: Waiting for host to start...
	I1226 21:47:49.583389   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:47:51.896770   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:47:51.896939   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:51.896939   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:47:54.498887   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:47:54.498974   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:55.499338   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:47:57.698924   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:47:57.698996   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:47:57.699137   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:00.286675   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:48:00.286922   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:01.302901   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:03.543129   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:03.543129   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:03.543129   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:06.114874   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:48:06.114945   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:07.127169   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:09.359368   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:09.359368   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:09.359368   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:11.963204   10896 main.go:141] libmachine: [stdout =====>] : 
	I1226 21:48:11.963375   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:12.966890   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:15.195704   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:15.195906   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:15.195992   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:17.832559   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:17.832708   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:17.832708   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:20.016567   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:20.016567   10896 main.go:141] libmachine: [stderr =====>] : 
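	The "Waiting for host to start..." section above is a simple poll loop: query the VM state, then the first adapter's first IP address, and retry about once per second until an address appears. A minimal sketch of that loop, where `get_vm_ip` is a stand-in for the PowerShell `Get-VM ... ipaddresses[0]` query (here it returns nothing for the first two calls, like a still-booting VM):

	```shell
	#!/bin/sh
	# Stub for the IP query: empty until the third attempt.
	get_vm_ip() {
	    [ "$1" -ge 3 ] && echo "172.21.177.30"
	}

	tries=0
	ip=""
	while [ -z "$ip" ]; do
	    tries=$((tries + 1))
	    ip=$(get_vm_ip "$tries")
	    # back off briefly between attempts, as the log's ~1s cadence shows
	    [ -z "$ip" ] && sleep 1
	done
	echo "VM reachable at $ip"
	```

	Note the counter is incremented in the parent shell, not inside the command substitution, so the retry count actually advances between iterations.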
	I1226 21:48:20.016567   10896 machine.go:88] provisioning docker machine ...
	I1226 21:48:20.016567   10896 buildroot.go:166] provisioning hostname "addons-839600"
	I1226 21:48:20.016567   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:22.197424   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:22.197424   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:22.197618   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:24.743811   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:24.744119   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:24.752256   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:48:24.763275   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:48:24.763275   10896 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-839600 && echo "addons-839600" | sudo tee /etc/hostname
	I1226 21:48:24.928034   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-839600
	
	I1226 21:48:24.928115   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:27.044400   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:27.044649   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:27.044649   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:29.607351   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:29.607351   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:29.613578   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:48:29.614365   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:48:29.614365   10896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-839600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-839600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-839600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 21:48:29.769941   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
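	The `/etc/hosts` snippet the provisioner runs above is an idempotent patch: do nothing if the hostname is already present, rewrite an existing `127.0.1.1` entry if there is one, and append one otherwise. A simplified, standalone version of the same logic against a scratch file (`/tmp/hosts.demo` is illustrative; the real target is `/etc/hosts` over SSH with sudo):

	```shell
	#!/bin/sh
	HOSTS=/tmp/hosts.demo
	NAME=addons-839600
	printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

	if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
	        # replace the existing 127.0.1.1 entry with the new hostname
	        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
	    else
	        # no 127.0.1.1 line yet: append one
	        echo "127.0.1.1 $NAME" >> "$HOSTS"
	    fi
	fi
	cat "$HOSTS"
	```

	Running the script twice leaves the file unchanged on the second pass, which is why the provisioner can safely re-run it on every `minikube start`.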
	I1226 21:48:29.770247   10896 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 21:48:29.770311   10896 buildroot.go:174] setting up certificates
	I1226 21:48:29.770417   10896 provision.go:83] configureAuth start
	I1226 21:48:29.770510   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:31.910435   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:31.910783   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:31.910783   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:34.481534   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:34.481534   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:34.481630   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:36.645198   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:36.645198   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:36.645198   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:39.189638   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:39.189729   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:39.189729   10896 provision.go:138] copyHostCerts
	I1226 21:48:39.190404   10896 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 21:48:39.191889   10896 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 21:48:39.193322   10896 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 21:48:39.194894   10896 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-839600 san=[172.21.177.30 172.21.177.30 localhost 127.0.0.1 minikube addons-839600]
	I1226 21:48:39.294914   10896 provision.go:172] copyRemoteCerts
	I1226 21:48:39.306963   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 21:48:39.306963   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:41.485971   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:41.485971   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:41.485971   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:44.011448   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:44.011448   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:44.011784   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:48:44.123037   10896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8160744s)
	I1226 21:48:44.123870   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1226 21:48:44.168022   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 21:48:44.209612   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 21:48:44.248030   10896 provision.go:86] duration metric: configureAuth took 14.4776132s
	I1226 21:48:44.248648   10896 buildroot.go:189] setting minikube options for container-runtime
	I1226 21:48:44.249447   10896 config.go:182] Loaded profile config "addons-839600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 21:48:44.249447   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:46.448646   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:46.448851   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:46.448851   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:49.052343   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:49.052343   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:49.065027   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:48:49.065824   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:48:49.065824   10896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 21:48:49.216166   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 21:48:49.216318   10896 buildroot.go:70] root file system type: tmpfs
	I1226 21:48:49.216489   10896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 21:48:49.216614   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:51.407530   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:51.407623   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:51.407623   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:53.952471   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:53.952555   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:53.957994   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:48:53.958684   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:48:53.959235   10896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 21:48:54.123148   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 21:48:54.123353   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:48:56.273288   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:48:56.273368   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:56.273368   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:48:58.800982   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:48:58.800982   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:48:58.805667   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:48:58.806802   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:48:58.806802   10896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 21:49:00.001644   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
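	The command above uses a "write new, diff, swap only on change" pattern: the fresh unit is written to `docker.service.new`, and only if it differs from the installed unit is it moved into place and the daemon reloaded/restarted. A sketch of that pattern with illustrative `/tmp` paths in place of `/lib/systemd/system`:

	```shell
	#!/bin/sh
	UNIT=/tmp/docker.service.demo
	printf '[Unit]\nDescription=old\n' > "$UNIT"
	printf '[Unit]\nDescription=new\n' > "$UNIT.new"

	if ! diff -u "$UNIT" "$UNIT.new" > /dev/null 2>&1; then
	    mv "$UNIT.new" "$UNIT"
	    # in the real flow: systemctl daemon-reload && systemctl enable docker \
	    #                   && systemctl restart docker
	    echo "unit changed, service would be restarted"
	fi
	```

	The `diff ... || { ... }` form in the log also fires on first install, when the old unit does not exist yet and `diff` fails with a nonzero status, which is exactly the `can't stat '/lib/systemd/system/docker.service'` case recorded above.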
	
	I1226 21:49:00.001644   10896 machine.go:91] provisioned docker machine in 39.9850765s
	I1226 21:49:00.001644   10896 client.go:171] LocalClient.Create took 1m52.5073456s
	I1226 21:49:00.001739   10896 start.go:167] duration metric: libmachine.API.Create for "addons-839600" took 1m52.5074411s
	I1226 21:49:00.001795   10896 start.go:300] post-start starting for "addons-839600" (driver="hyperv")
	I1226 21:49:00.001795   10896 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 21:49:00.017438   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 21:49:00.018440   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:02.153012   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:02.153228   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:02.153228   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:04.707493   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:04.707493   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:04.707732   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:49:04.819658   10896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8012182s)
	I1226 21:49:04.836316   10896 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 21:49:04.844012   10896 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 21:49:04.844012   10896 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 21:49:04.844012   10896 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 21:49:04.844886   10896 start.go:303] post-start completed in 4.8430908s
	I1226 21:49:04.847755   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:06.993679   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:06.993679   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:06.993679   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:09.536611   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:09.536611   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:09.537023   10896 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\config.json ...
	I1226 21:49:09.540264   10896 start.go:128] duration metric: createHost completed in 2m2.0502017s
	I1226 21:49:09.540264   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:11.669089   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:11.669388   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:11.669474   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:14.240939   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:14.240939   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:14.247770   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:49:14.247884   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:49:14.247884   10896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 21:49:14.387361   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703627354.385448373
	
	I1226 21:49:14.387464   10896 fix.go:206] guest clock: 1703627354.385448373
	I1226 21:49:14.387464   10896 fix.go:219] Guest: 2023-12-26 21:49:14.385448373 +0000 UTC Remote: 2023-12-26 21:49:09.5402642 +0000 UTC m=+127.898760801 (delta=4.845184173s)
	I1226 21:49:14.387464   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:16.560089   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:16.560424   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:16.560424   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:19.131054   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:19.131054   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:19.136968   10896 main.go:141] libmachine: Using SSH client type: native
	I1226 21:49:19.137622   10896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb15420] 0xb17f60 <nil>  [] 0s} 172.21.177.30 22 <nil> <nil>}
	I1226 21:49:19.137622   10896 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703627354
	I1226 21:49:19.288411   10896 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 21:49:14 UTC 2023
	
	I1226 21:49:19.288474   10896 fix.go:226] clock set: Tue Dec 26 21:49:14 UTC 2023
	 (err=<nil>)
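	The clock fix above reads the guest clock over SSH (`date +%s.%N`), compares it with the host's "Remote" timestamp, and resets the guest with `sudo date -s @<epoch>` when they drift. In whole seconds, the arithmetic from the log's own values (guest `1703627354`, host-side remote time 21:49:09 UTC, roughly epoch `1703627349`; the log reports the fractional delta as 4.845s):

	```shell
	#!/bin/sh
	# Integer-second version of the skew computation; values taken from the log.
	guest=1703627354
	remote=1703627349
	delta=$((guest - remote))
	echo "clock delta: ${delta}s"
	# real flow (not run here): ssh <vm> "sudo date -s @$guest"
	```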
	I1226 21:49:19.288502   10896 start.go:83] releasing machines lock for "addons-839600", held for 2m11.7984115s
	I1226 21:49:19.288502   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:21.501729   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:21.501729   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:21.501831   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:24.151831   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:24.151831   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:24.155816   10896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 21:49:24.155816   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:24.168555   10896 ssh_runner.go:195] Run: cat /version.json
	I1226 21:49:24.168555   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:49:26.377975   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:26.377975   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:26.377975   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:49:26.378240   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:26.378240   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:26.378493   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:49:29.022002   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:29.022002   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:29.022200   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:49:29.039190   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:49:29.039388   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:49:29.039566   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:49:29.122975   10896 ssh_runner.go:235] Completed: cat /version.json: (4.9542385s)
	I1226 21:49:29.136816   10896 ssh_runner.go:195] Run: systemctl --version
	I1226 21:49:29.388614   10896 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2327472s)
	I1226 21:49:29.402121   10896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1226 21:49:29.410073   10896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 21:49:29.423861   10896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:49:29.448113   10896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 21:49:29.448180   10896 start.go:475] detecting cgroup driver to use...
	I1226 21:49:29.448249   10896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 21:49:29.490921   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 21:49:29.524737   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 21:49:29.543886   10896 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 21:49:29.559599   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 21:49:29.593569   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 21:49:29.626247   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 21:49:29.657710   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 21:49:29.687162   10896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 21:49:29.717760   10896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
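The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place to select the cgroupfs driver. A sketch of the core substitution against a throwaway copy of the relevant line (the sample file is a hypothetical stand-in; the sed expression is the one from the log):

```shell
# Sample fragment resembling the SystemdCgroup line in config.toml.
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"

# Same substitution the log runs: force SystemdCgroup to false while
# preserving the original indentation via the \1 capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

cat "$cfg"    # -> "    SystemdCgroup = false"
rm -f "$cfg"
```

Anchoring on `^( *)key = ` rather than the bare key keeps the edit idempotent: re-running it on an already-patched file is a no-op.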
	I1226 21:49:29.754246   10896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 21:49:29.784125   10896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 21:49:29.814643   10896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:49:30.008128   10896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 21:49:30.037340   10896 start.go:475] detecting cgroup driver to use...
	I1226 21:49:30.055996   10896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 21:49:30.094108   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 21:49:30.131099   10896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 21:49:30.183207   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 21:49:30.224986   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 21:49:30.260753   10896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 21:49:30.326368   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 21:49:30.348183   10896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 21:49:30.393463   10896 ssh_runner.go:195] Run: which cri-dockerd
	I1226 21:49:30.413736   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 21:49:30.429728   10896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 21:49:30.475785   10896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 21:49:30.663928   10896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 21:49:30.839362   10896 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 21:49:30.839702   10896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 21:49:30.884968   10896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:49:31.081319   10896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 21:49:32.654356   10896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5729642s)
	I1226 21:49:32.667630   10896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 21:49:32.852454   10896 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 21:49:33.031513   10896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 21:49:33.213392   10896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:49:33.397929   10896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 21:49:33.438870   10896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:49:33.614382   10896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 21:49:33.718378   10896 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 21:49:33.732360   10896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 21:49:33.741204   10896 start.go:543] Will wait 60s for crictl version
	I1226 21:49:33.755310   10896 ssh_runner.go:195] Run: which crictl
	I1226 21:49:33.776429   10896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 21:49:33.849479   10896 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 21:49:33.859437   10896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 21:49:33.907701   10896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 21:49:33.949441   10896 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 21:49:33.949441   10896 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 21:49:33.955114   10896 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 21:49:33.955114   10896 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 21:49:33.955114   10896 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 21:49:33.955114   10896 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 21:49:33.958974   10896 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 21:49:33.959056   10896 ip.go:210] interface addr: 172.21.176.1/20
	I1226 21:49:33.975081   10896 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 21:49:33.984682   10896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
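The /etc/hosts rewrite above uses a grep-then-append pattern: strip any stale `host.minikube.internal` line, append the fresh mapping, and copy the result back, so repeated starts never accumulate duplicates. The same pattern against a scratch file (the temp paths are hypothetical stand-ins for /etc/hosts):

```shell
# Scratch hosts file with one stale minikube entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# Drop the stale entry, append the current gateway IP, replace the file.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.21.176.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

grep 'host.minikube.internal' "$hosts"    # exactly one, current entry
rm -f "$hosts"
```

The `$'\t...'` pattern anchors on the tab separator and end-of-line, so hostnames that merely contain the string as a prefix are left alone.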
	I1226 21:49:34.004520   10896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 21:49:34.014549   10896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 21:49:34.041089   10896 docker.go:671] Got preloaded images: 
	I1226 21:49:34.041089   10896 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1226 21:49:34.057717   10896 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1226 21:49:34.088090   10896 ssh_runner.go:195] Run: which lz4
	I1226 21:49:34.107110   10896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1226 21:49:34.114173   10896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 21:49:34.114449   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
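The existence check above relies on `stat`'s exit status rather than its output: status 1 ("No such file or directory") is what triggers the scp of the preload tarball. A sketch of the same probe, using a scratch path in place of /preloaded.tar.lz4:

```shell
# A path guaranteed not to exist yet.
f=$(mktemp -u)

# Same shape as the log's check: stat exits non-zero when the file is
# missing, so the caller falls through to copying the tarball over.
if ! stat -c "%s %y" "$f" >/dev/null 2>&1; then
  echo "missing: would scp the preload tarball"
fi

touch "$f"
stat -c "%s %y" "$f" >/dev/null 2>&1 && echo "present: skip the copy"
rm -f "$f"
```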
	I1226 21:49:36.918062   10896 docker.go:635] Took 2.824148 seconds to copy over tarball
	I1226 21:49:36.931553   10896 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1226 21:49:43.520511   10896 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.5881924s)
	I1226 21:49:43.520608   10896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1226 21:49:43.598808   10896 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1226 21:49:43.616887   10896 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1226 21:49:43.659263   10896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:49:43.835792   10896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 21:49:49.724123   10896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.8876994s)
	I1226 21:49:49.735113   10896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 21:49:49.764933   10896 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1226 21:49:49.765030   10896 cache_images.go:84] Images are preloaded, skipping loading
	I1226 21:49:49.775356   10896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 21:49:49.812558   10896 cni.go:84] Creating CNI manager for ""
	I1226 21:49:49.812830   10896 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 21:49:49.812830   10896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 21:49:49.812830   10896 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.177.30 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-839600 NodeName:addons-839600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.177.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.177.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 21:49:49.812830   10896 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.177.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-839600"
	  kubeletExtraArgs:
	    node-ip: 172.21.177.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.177.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 21:49:49.812830   10896 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-839600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.177.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-839600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 21:49:49.826551   10896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 21:49:49.842512   10896 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 21:49:49.859002   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 21:49:49.873951   10896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1226 21:49:49.902755   10896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 21:49:49.928301   10896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1226 21:49:49.970587   10896 ssh_runner.go:195] Run: grep 172.21.177.30	control-plane.minikube.internal$ /etc/hosts
	I1226 21:49:49.975707   10896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.177.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 21:49:49.991810   10896 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600 for IP: 172.21.177.30
	I1226 21:49:49.991810   10896 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:49.991810   10896 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 21:49:50.245254   10896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I1226 21:49:50.245254   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.247550   10896 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I1226 21:49:50.247550   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.248393   10896 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 21:49:50.424692   10896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I1226 21:49:50.424692   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.426781   10896 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I1226 21:49:50.426781   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.428241   10896 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.key
	I1226 21:49:50.428241   10896 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt with IP's: []
	I1226 21:49:50.679343   10896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt ...
	I1226 21:49:50.679343   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: {Name:mk5b4c832f9bf4fa25d7cc7898cdc5f3cdd7b639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.680189   10896 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.key ...
	I1226 21:49:50.681124   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.key: {Name:mk757a360d98e043f18784e8994581f1b403b92d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.682358   10896 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.key.062bae38
	I1226 21:49:50.683072   10896 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.crt.062bae38 with IP's: [172.21.177.30 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 21:49:50.771443   10896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.crt.062bae38 ...
	I1226 21:49:50.771443   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.crt.062bae38: {Name:mk5d2121ff914555ffaec13b744fed7ddef00d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.773819   10896 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.key.062bae38 ...
	I1226 21:49:50.773819   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.key.062bae38: {Name:mk6e55645c42a77a82326dae62ce02bb061f8535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.774754   10896 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.crt.062bae38 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.crt
	I1226 21:49:50.785724   10896 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.key.062bae38 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.key
	I1226 21:49:50.786846   10896 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.key
	I1226 21:49:50.787964   10896 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.crt with IP's: []
	I1226 21:49:50.976112   10896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.crt ...
	I1226 21:49:50.976112   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.crt: {Name:mk3e300deb20f902baafcd0e65c89326f4021cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.978067   10896 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.key ...
	I1226 21:49:50.978067   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.key: {Name:mk6df53ffce98ff6b5cd491bc009f6fea89df3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:49:50.989915   10896 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 21:49:50.991061   10896 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 21:49:50.991258   10896 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 21:49:50.991504   10896 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 21:49:50.992894   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 21:49:51.034305   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 21:49:51.078655   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 21:49:51.118914   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 21:49:51.155948   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 21:49:51.197488   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 21:49:51.234413   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 21:49:51.275610   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 21:49:51.318511   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 21:49:51.359776   10896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 21:49:51.402187   10896 ssh_runner.go:195] Run: openssl version
	I1226 21:49:51.424383   10896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 21:49:51.453931   10896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:49:51.459934   10896 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:49:51.471934   10896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:49:51.496787   10896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
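The symlink created above follows OpenSSL's hashed-directory convention for /etc/ssl/certs: a link named `<subject-hash>.0` points at the PEM, and `b5213941` in the log is the subject hash of minikubeCA. A sketch of the same layout with a throwaway self-signed cert (all names here are hypothetical):

```shell
dir=$(mktemp -d); cd "$dir"

# Generate a throwaway CA-style certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout ca.key -out ca.pem 2>/dev/null

# Compute the subject hash (same openssl invocation as the log) and
# create the <hash>.0 symlink that TLS libraries scan for.
h=$(openssl x509 -hash -noout -in ca.pem)
ln -fs ca.pem "$h.0"

ls -l "$h.0"
cd /; rm -rf "$dir"
```

The `.0` suffix is a collision counter: a second, distinct cert with the same subject hash would be linked as `<hash>.1`.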
	I1226 21:49:51.529332   10896 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 21:49:51.535413   10896 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 21:49:51.535413   10896 kubeadm.go:404] StartCluster: {Name:addons-839600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-839600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.177.30 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:49:51.545820   10896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 21:49:51.586984   10896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 21:49:51.619674   10896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 21:49:51.653236   10896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 21:49:51.668085   10896 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 21:49:51.668261   10896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1226 21:49:51.949795   10896 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 21:50:06.375850   10896 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 21:50:06.375850   10896 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 21:50:06.375850   10896 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 21:50:06.376441   10896 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 21:50:06.376758   10896 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 21:50:06.376999   10896 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 21:50:06.380213   10896 out.go:204]   - Generating certificates and keys ...
	I1226 21:50:06.380429   10896 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 21:50:06.380610   10896 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 21:50:06.380610   10896 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 21:50:06.380610   10896 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 21:50:06.381231   10896 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 21:50:06.381355   10896 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 21:50:06.381355   10896 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 21:50:06.381355   10896 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-839600 localhost] and IPs [172.21.177.30 127.0.0.1 ::1]
	I1226 21:50:06.382043   10896 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 21:50:06.382090   10896 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-839600 localhost] and IPs [172.21.177.30 127.0.0.1 ::1]
	I1226 21:50:06.382090   10896 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 21:50:06.382090   10896 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 21:50:06.382682   10896 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 21:50:06.382765   10896 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 21:50:06.382765   10896 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 21:50:06.382765   10896 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 21:50:06.382765   10896 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 21:50:06.383300   10896 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 21:50:06.383508   10896 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 21:50:06.383508   10896 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 21:50:06.386363   10896 out.go:204]   - Booting up control plane ...
	I1226 21:50:06.386363   10896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 21:50:06.386363   10896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 21:50:06.386363   10896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 21:50:06.387342   10896 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 21:50:06.387342   10896 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 21:50:06.387342   10896 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 21:50:06.387342   10896 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 21:50:06.388354   10896 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003581 seconds
	I1226 21:50:06.388354   10896 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 21:50:06.388354   10896 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 21:50:06.388354   10896 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 21:50:06.388354   10896 kubeadm.go:322] [mark-control-plane] Marking the node addons-839600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 21:50:06.389351   10896 kubeadm.go:322] [bootstrap-token] Using token: h3gvky.wggsi8q8f2ip8pl5
	I1226 21:50:06.393334   10896 out.go:204]   - Configuring RBAC rules ...
	I1226 21:50:06.393334   10896 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 21:50:06.394086   10896 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 21:50:06.394356   10896 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 21:50:06.394356   10896 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 21:50:06.394356   10896 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 21:50:06.394356   10896 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 21:50:06.395344   10896 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 21:50:06.395344   10896 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 21:50:06.395344   10896 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 21:50:06.395344   10896 kubeadm.go:322] 
	I1226 21:50:06.395344   10896 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 21:50:06.395344   10896 kubeadm.go:322] 
	I1226 21:50:06.395344   10896 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 21:50:06.395344   10896 kubeadm.go:322] 
	I1226 21:50:06.395344   10896 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 21:50:06.395344   10896 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 21:50:06.396330   10896 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 21:50:06.396330   10896 kubeadm.go:322] 
	I1226 21:50:06.396330   10896 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 21:50:06.396330   10896 kubeadm.go:322] 
	I1226 21:50:06.396330   10896 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 21:50:06.396330   10896 kubeadm.go:322] 
	I1226 21:50:06.396330   10896 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 21:50:06.396330   10896 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 21:50:06.396330   10896 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 21:50:06.396330   10896 kubeadm.go:322] 
	I1226 21:50:06.396330   10896 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 21:50:06.397364   10896 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 21:50:06.397364   10896 kubeadm.go:322] 
	I1226 21:50:06.397364   10896 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h3gvky.wggsi8q8f2ip8pl5 \
	I1226 21:50:06.397364   10896 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 \
	I1226 21:50:06.397364   10896 kubeadm.go:322] 	--control-plane 
	I1226 21:50:06.397364   10896 kubeadm.go:322] 
	I1226 21:50:06.397364   10896 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 21:50:06.398339   10896 kubeadm.go:322] 
	I1226 21:50:06.398339   10896 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h3gvky.wggsi8q8f2ip8pl5 \
	I1226 21:50:06.398339   10896 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 
	I1226 21:50:06.398339   10896 cni.go:84] Creating CNI manager for ""
	I1226 21:50:06.398339   10896 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 21:50:06.401341   10896 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1226 21:50:06.415503   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1226 21:50:06.444518   10896 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1226 21:50:06.526613   10896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 21:50:06.542096   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:06.542096   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=addons-839600 minikube.k8s.io/updated_at=2023_12_26T21_50_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:06.570273   10896 ops.go:34] apiserver oom_adj: -16
	I1226 21:50:06.834219   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:07.337822   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:07.843570   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:08.347243   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:08.833878   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:09.337692   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:09.843062   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:10.340880   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:10.842063   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:11.344181   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:11.844824   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:12.347164   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:12.835367   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:13.338635   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:13.839547   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:14.338275   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:14.843221   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:15.343520   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:15.851121   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:16.340618   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:16.845898   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:17.334837   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:17.840645   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:18.344200   10896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:50:18.473556   10896 kubeadm.go:1088] duration metric: took 11.9468533s to wait for elevateKubeSystemPrivileges.
	I1226 21:50:18.473688   10896 kubeadm.go:406] StartCluster complete in 26.9382758s
	I1226 21:50:18.473777   10896 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:50:18.474019   10896 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 21:50:18.474694   10896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:50:18.478146   10896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 21:50:18.478146   10896 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I1226 21:50:18.478438   10896 addons.go:69] Setting yakd=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting gcp-auth=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting inspektor-gadget=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting storage-provisioner=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting registry=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon inspektor-gadget=true in "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting ingress-dns=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon ingress-dns=true in "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting metrics-server=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon metrics-server=true in "addons-839600"
	I1226 21:50:18.478438   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.478438   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon registry=true in "addons-839600"
	I1226 21:50:18.478438   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.478438   10896 mustload.go:65] Loading cluster: addons-839600
	I1226 21:50:18.478438   10896 addons.go:69] Setting cloud-spanner=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon cloud-spanner=true in "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-839600"
	I1226 21:50:18.478438   10896 config.go:182] Loaded profile config "addons-839600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 21:50:18.478438   10896 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting ingress=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting default-storageclass=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon yakd=true in "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting helm-tiller=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 config.go:182] Loaded profile config "addons-839600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 21:50:18.478438   10896 addons.go:69] Setting volumesnapshots=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-839600"
	I1226 21:50:18.478438   10896 addons.go:237] Setting addon storage-provisioner=true in "addons-839600"
	I1226 21:50:18.478438   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.479455   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.479455   10896 addons.go:237] Setting addon helm-tiller=true in "addons-839600"
	I1226 21:50:18.479455   10896 addons.go:237] Setting addon ingress=true in "addons-839600"
	I1226 21:50:18.479455   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.479455   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.479455   10896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-839600"
	I1226 21:50:18.479455   10896 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-839600"
	I1226 21:50:18.480443   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.480443   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.480443   10896 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-839600"
	I1226 21:50:18.480443   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.481444   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.481444   10896 addons.go:237] Setting addon volumesnapshots=true in "addons-839600"
	I1226 21:50:18.481444   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:18.482449   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.483445   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.483445   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.484514   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.484514   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.484514   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.485471   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.486460   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.487465   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.487465   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.487465   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.487465   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.488465   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:18.488465   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:19.159037   10896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.21.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 21:50:19.330923   10896 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-839600" context rescaled to 1 replicas
	I1226 21:50:19.330923   10896 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.177.30 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 21:50:19.349596   10896 out.go:177] * Verifying Kubernetes components...
	I1226 21:50:19.391678   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:50:24.464150   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:24.464150   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:24.464150   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:24.692657   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:24.692657   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:24.694663   10896 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-839600"
	I1226 21:50:24.694663   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:24.696664   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:24.698663   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:24.698663   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:24.705658   10896 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I1226 21:50:24.699085   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:24.712160   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:24.715653   10896 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1226 21:50:24.712661   10896 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1226 21:50:24.712661   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:24.718055   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:24.720661   10896 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1226 21:50:24.724652   10896 out.go:177]   - Using image docker.io/registry:2.8.3
	I1226 21:50:24.724652   10896 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:50:24.728692   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1226 21:50:24.728692   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:24.724652   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1226 21:50:24.730670   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:24.728692   10896 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1226 21:50:24.730670   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1226 21:50:24.730670   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:24.731742   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:24.731742   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:24.749313   10896 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1226 21:50:24.757655   10896 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1226 21:50:24.757655   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1226 21:50:24.757655   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.016284   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.016284   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.027284   10896 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1226 21:50:25.039988   10896 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1226 21:50:25.039988   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1226 21:50:25.039988   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.365285   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.365285   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.371320   10896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:50:25.374429   10896 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:50:25.374429   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 21:50:25.375430   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.380545   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.380545   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.386421   10896 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1226 21:50:25.395431   10896 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:50:25.395431   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1226 21:50:25.395431   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.469431   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.471430   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.491893   10896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I1226 21:50:25.495824   10896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:50:25.493672   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.502825   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.506824   10896 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1226 21:50:25.509824   10896 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1226 21:50:25.509824   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1226 21:50:25.510815   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.522833   10896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:50:25.537844   10896 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:50:25.537844   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1226 21:50:25.537844   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.790927   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.858464   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.854461   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.894462   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.919441   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1226 21:50:25.932431   10896 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1226 21:50:25.932431   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1226 21:50:25.932431   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:25.894462   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:25.932431   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:25.894462   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1226 21:50:25.970831   10896 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1226 21:50:26.042234   10896 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1226 21:50:26.042234   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1226 21:50:26.042496   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:26.037095   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1226 21:50:26.063096   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1226 21:50:26.071092   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1226 21:50:26.083095   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1226 21:50:26.105344   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1226 21:50:26.117351   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1226 21:50:26.127352   10896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1226 21:50:26.133351   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1226 21:50:26.133351   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1226 21:50:26.133351   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:26.700896   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:26.700896   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:26.703891   10896 addons.go:237] Setting addon default-storageclass=true in "addons-839600"
	I1226 21:50:26.703891   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:26.704896   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:26.904116   10896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.21.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.7450788s)
	I1226 21:50:26.904116   10896 start.go:929] {"host.minikube.internal": 172.21.176.1} host record injected into CoreDNS's ConfigMap
	I1226 21:50:26.905116   10896 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (7.5134378s)
	I1226 21:50:26.909116   10896 node_ready.go:35] waiting up to 6m0s for node "addons-839600" to be "Ready" ...
	I1226 21:50:27.332546   10896 node_ready.go:49] node "addons-839600" has status "Ready":"True"
	I1226 21:50:27.332546   10896 node_ready.go:38] duration metric: took 423.4302ms waiting for node "addons-839600" to be "Ready" ...
	I1226 21:50:27.332546   10896 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:50:29.253555   10896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-45tj4" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:30.231315   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:30.231315   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:30.232321   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:30.293299   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:30.293299   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:30.293299   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:30.308292   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:30.308292   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:30.308292   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:30.338303   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:30.338303   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:30.349836   10896 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1226 21:50:30.381397   10896 out.go:177]   - Using image docker.io/busybox:stable
	I1226 21:50:30.389399   10896 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:50:30.389399   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1226 21:50:30.390017   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:30.868629   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:30.868629   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:30.868629   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:31.169660   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:31.169660   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:31.169660   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:31.336802   10896 pod_ready.go:102] pod "coredns-5dd5756b68-45tj4" in "kube-system" namespace has status "Ready":"False"
	I1226 21:50:31.548804   10896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1226 21:50:31.548804   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:31.842738   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:31.842738   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:31.842738   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:32.013880   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:32.013880   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:32.013880   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:32.291084   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:32.291084   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:32.291084   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:32.359084   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:32.359084   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:32.359084   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:32.366223   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:32.366223   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:32.366223   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:32.421209   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:32.421209   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:32.421209   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:33.186048   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:33.186048   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:33.186048   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:33.440349   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:33.440349   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:33.440349   10896 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 21:50:33.440349   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 21:50:33.440349   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:33.796572   10896 pod_ready.go:102] pod "coredns-5dd5756b68-45tj4" in "kube-system" namespace has status "Ready":"False"
	I1226 21:50:35.865028   10896 pod_ready.go:102] pod "coredns-5dd5756b68-45tj4" in "kube-system" namespace has status "Ready":"False"
	I1226 21:50:37.101072   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:37.101072   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:37.101072   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:37.338142   10896 pod_ready.go:92] pod "coredns-5dd5756b68-45tj4" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:37.338142   10896 pod_ready.go:81] duration metric: took 8.0845863s waiting for pod "coredns-5dd5756b68-45tj4" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.338142   10896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4j7jx" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.411397   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:37.413704   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:37.414391   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:37.764162   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:37.764162   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:37.764435   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:37.859479   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:50:37.872988   10896 pod_ready.go:92] pod "coredns-5dd5756b68-4j7jx" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:37.873084   10896 pod_ready.go:81] duration metric: took 534.8467ms waiting for pod "coredns-5dd5756b68-4j7jx" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.873084   10896 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.899578   10896 pod_ready.go:92] pod "etcd-addons-839600" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:37.899578   10896 pod_ready.go:81] duration metric: took 26.494ms waiting for pod "etcd-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.899578   10896 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.928592   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:37.928592   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:37.928592   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:37.949581   10896 pod_ready.go:92] pod "kube-apiserver-addons-839600" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:37.949581   10896 pod_ready.go:81] duration metric: took 50.0031ms waiting for pod "kube-apiserver-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.949581   10896 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.964124   10896 pod_ready.go:92] pod "kube-controller-manager-addons-839600" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:37.964239   10896 pod_ready.go:81] duration metric: took 14.6578ms waiting for pod "kube-controller-manager-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:37.964239   10896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7jqgh" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:38.001114   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:38.001271   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:38.001409   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:38.087401   10896 pod_ready.go:92] pod "kube-proxy-7jqgh" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:38.087401   10896 pod_ready.go:81] duration metric: took 123.1622ms waiting for pod "kube-proxy-7jqgh" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:38.087401   10896 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:38.246184   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1226 21:50:38.265359   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:38.265394   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:38.265655   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:38.346279   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:38.346459   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:38.346869   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:38.490983   10896 pod_ready.go:92] pod "kube-scheduler-addons-839600" in "kube-system" namespace has status "Ready":"True"
	I1226 21:50:38.490983   10896 pod_ready.go:81] duration metric: took 403.5815ms waiting for pod "kube-scheduler-addons-839600" in "kube-system" namespace to be "Ready" ...
	I1226 21:50:38.491364   10896 pod_ready.go:38] duration metric: took 11.1584371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:50:38.491509   10896 api_server.go:52] waiting for apiserver process to appear ...
	I1226 21:50:38.510048   10896 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1226 21:50:38.510048   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1226 21:50:38.523032   10896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 21:50:38.748610   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:38.748610   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:38.749725   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:38.755640   10896 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1226 21:50:38.755640   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1226 21:50:38.832764   10896 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:50:38.832764   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1226 21:50:38.846831   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:38.846831   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:38.846831   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:38.863765   10896 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1226 21:50:38.864663   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1226 21:50:38.910777   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:38.910777   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:38.910777   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:38.965452   10896 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1226 21:50:38.965487   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1226 21:50:39.085669   10896 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1226 21:50:39.085669   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1226 21:50:39.114676   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:39.114676   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:39.114676   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:39.116666   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:50:39.208993   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:39.208993   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:39.208993   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:39.248986   10896 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1226 21:50:39.248986   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1226 21:50:39.270463   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:39.270463   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:39.270463   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:39.304777   10896 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:50:39.304855   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1226 21:50:39.308643   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:39.311305   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:39.311305   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:39.345948   10896 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:50:39.346030   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1226 21:50:39.367019   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:39.367098   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:39.367472   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:39.452258   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:50:39.507392   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:50:39.575636   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:50:39.575636   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:50:39.614358   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1226 21:50:39.614530   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1226 21:50:39.725625   10896 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1226 21:50:39.725625   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1226 21:50:39.746626   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:50:39.855124   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1226 21:50:39.855212   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1226 21:50:39.908491   10896 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1226 21:50:39.908603   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1226 21:50:39.997955   10896 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1226 21:50:39.997955   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1226 21:50:40.056730   10896 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1226 21:50:40.056730   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1226 21:50:40.098435   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1226 21:50:40.098435   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1226 21:50:40.145674   10896 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1226 21:50:40.145674   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1226 21:50:40.231522   10896 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1226 21:50:40.231522   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1226 21:50:40.234700   10896 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1226 21:50:40.234700   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1226 21:50:40.302928   10896 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1226 21:50:40.302928   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1226 21:50:40.312540   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1226 21:50:40.312627   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1226 21:50:40.464788   10896 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1226 21:50:40.464788   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1226 21:50:40.465633   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1226 21:50:40.520867   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1226 21:50:40.520927   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1226 21:50:40.544655   10896 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1226 21:50:40.544655   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1226 21:50:40.752067   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:40.752067   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:40.752067   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:40.794709   10896 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1226 21:50:40.794709   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1226 21:50:40.816919   10896 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1226 21:50:40.817027   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1226 21:50:40.823342   10896 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1226 21:50:40.823387   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1226 21:50:40.951533   10896 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1226 21:50:40.951533   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1226 21:50:40.962511   10896 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:50:40.962565   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1226 21:50:41.035117   10896 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:50:41.035229   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1226 21:50:41.156493   10896 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1226 21:50:41.156493   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1226 21:50:41.177879   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:50:41.256706   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:41.256706   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:41.256706   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:41.294506   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:50:41.378140   10896 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1226 21:50:41.378140   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1226 21:50:41.564424   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:50:41.672911   10896 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:50:41.673059   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1226 21:50:41.818570   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:41.822964   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:41.823315   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:41.936949   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.6907657s)
	I1226 21:50:41.937000   10896 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.4139678s)
	I1226 21:50:41.937000   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.0775214s)
	I1226 21:50:41.937131   10896 api_server.go:72] duration metric: took 22.6060775s to wait for apiserver process to appear ...
	I1226 21:50:41.937131   10896 api_server.go:88] waiting for apiserver healthz status ...
	I1226 21:50:41.937234   10896 api_server.go:253] Checking apiserver healthz at https://172.21.177.30:8443/healthz ...
	I1226 21:50:41.947864   10896 api_server.go:279] https://172.21.177.30:8443/healthz returned 200:
	ok
	I1226 21:50:41.950961   10896 api_server.go:141] control plane version: v1.28.4
	I1226 21:50:41.951082   10896 api_server.go:131] duration metric: took 13.9513ms to wait for apiserver health ...
	I1226 21:50:41.951106   10896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 21:50:41.971542   10896 system_pods.go:59] 8 kube-system pods found
	I1226 21:50:41.971542   10896 system_pods.go:61] "coredns-5dd5756b68-45tj4" [bab4eeda-2aa3-432a-81c6-3af59eaa560e] Running
	I1226 21:50:41.971635   10896 system_pods.go:61] "coredns-5dd5756b68-4j7jx" [9424e4c2-b17f-4432-b382-0fe5572f41eb] Running
	I1226 21:50:41.971635   10896 system_pods.go:61] "etcd-addons-839600" [34b8eecc-e82c-431b-a6d2-d1f2702d0632] Running
	I1226 21:50:41.971635   10896 system_pods.go:61] "kube-apiserver-addons-839600" [20e2ca31-b617-4225-a3b4-21d0e8f30c09] Running
	I1226 21:50:41.971635   10896 system_pods.go:61] "kube-controller-manager-addons-839600" [0610b2e0-3a67-48d3-8399-8092b6df7e10] Running
	I1226 21:50:41.971635   10896 system_pods.go:61] "kube-ingress-dns-minikube" [785ab428-620d-4f30-9ff0-9c605f0d9d8f] Pending
	I1226 21:50:41.971635   10896 system_pods.go:61] "kube-proxy-7jqgh" [a279ed16-7b31-45f6-8768-89fb105d838d] Running
	I1226 21:50:41.971635   10896 system_pods.go:61] "kube-scheduler-addons-839600" [1ef2e22a-ca13-41c9-8b0f-6f50f5bfcf4a] Running
	I1226 21:50:41.971710   10896 system_pods.go:74] duration metric: took 20.6044ms to wait for pod list to return data ...
	I1226 21:50:41.971710   10896 default_sa.go:34] waiting for default service account to be created ...
	I1226 21:50:41.985421   10896 default_sa.go:45] found service account: "default"
	I1226 21:50:41.985421   10896 default_sa.go:55] duration metric: took 13.7107ms for default service account to be created ...
	I1226 21:50:41.985421   10896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 21:50:41.999955   10896 system_pods.go:86] 8 kube-system pods found
	I1226 21:50:42.000011   10896 system_pods.go:89] "coredns-5dd5756b68-45tj4" [bab4eeda-2aa3-432a-81c6-3af59eaa560e] Running
	I1226 21:50:42.000011   10896 system_pods.go:89] "coredns-5dd5756b68-4j7jx" [9424e4c2-b17f-4432-b382-0fe5572f41eb] Running
	I1226 21:50:42.000011   10896 system_pods.go:89] "etcd-addons-839600" [34b8eecc-e82c-431b-a6d2-d1f2702d0632] Running
	I1226 21:50:42.000077   10896 system_pods.go:89] "kube-apiserver-addons-839600" [20e2ca31-b617-4225-a3b4-21d0e8f30c09] Running
	I1226 21:50:42.000077   10896 system_pods.go:89] "kube-controller-manager-addons-839600" [0610b2e0-3a67-48d3-8399-8092b6df7e10] Running
	I1226 21:50:42.000077   10896 system_pods.go:89] "kube-ingress-dns-minikube" [785ab428-620d-4f30-9ff0-9c605f0d9d8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1226 21:50:42.000077   10896 system_pods.go:89] "kube-proxy-7jqgh" [a279ed16-7b31-45f6-8768-89fb105d838d] Running
	I1226 21:50:42.000077   10896 system_pods.go:89] "kube-scheduler-addons-839600" [1ef2e22a-ca13-41c9-8b0f-6f50f5bfcf4a] Running
	I1226 21:50:42.000158   10896 system_pods.go:126] duration metric: took 14.656ms to wait for k8s-apps to be running ...
	I1226 21:50:42.000158   10896 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 21:50:42.011900   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:50:42.138917   10896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1226 21:50:42.202772   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:50:42.231258   10896 addons.go:237] Setting addon gcp-auth=true in "addons-839600"
	I1226 21:50:42.231258   10896 host.go:66] Checking if "addons-839600" exists ...
	I1226 21:50:42.233024   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:43.112744   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 21:50:44.443895   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:44.443895   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:44.462738   10896 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1226 21:50:44.462738   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-839600 ).state
	I1226 21:50:45.958284   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.8416183s)
	I1226 21:50:45.958402   10896 addons.go:473] Verifying addon registry=true in "addons-839600"
	I1226 21:50:45.962590   10896 out.go:177] * Verifying registry addon...
	I1226 21:50:45.968844   10896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1226 21:50:45.995572   10896 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:50:45.995572   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:46.483144   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:46.745127   10896 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 21:50:46.745352   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:46.745427   10896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-839600 ).networkadapters[0]).ipaddresses[0]
	I1226 21:50:46.995460   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:47.479999   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:48.010489   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:48.491099   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:48.975616   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:49.489089   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:49.703655   10896 main.go:141] libmachine: [stdout =====>] : 172.21.177.30
	
	I1226 21:50:49.703655   10896 main.go:141] libmachine: [stderr =====>] : 
	I1226 21:50:49.704277   10896 sshutil.go:53] new ssh client: &{IP:172.21.177.30 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-839600\id_rsa Username:docker}
	I1226 21:50:49.997968   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:50.483583   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:50.988337   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:51.477646   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:52.021882   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:52.487812   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:52.977079   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:53.483078   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:54.002463   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:54.358685   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.9064269s)
	I1226 21:50:54.358685   10896 addons.go:473] Verifying addon ingress=true in "addons-839600"
	I1226 21:50:54.358685   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (14.8512289s)
	I1226 21:50:54.358685   10896 addons.go:473] Verifying addon metrics-server=true in "addons-839600"
	I1226 21:50:54.362677   10896 out.go:177] * Verifying ingress addon...
	I1226 21:50:54.358685   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (14.7830488s)
	I1226 21:50:54.358685   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (14.7830488s)
	I1226 21:50:54.359675   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.6130491s)
	I1226 21:50:54.359675   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (13.894042s)
	I1226 21:50:54.359675   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (13.181796s)
	I1226 21:50:54.359675   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (13.065169s)
	I1226 21:50:54.359675   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.7952505s)
	I1226 21:50:54.359675   10896 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (12.3467877s)
	I1226 21:50:54.373683   10896 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-839600 service yakd-dashboard -n yakd-dashboard
	
	
	W1226 21:50:54.369719   10896 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1226 21:50:54.369719   10896 system_svc.go:56] duration metric: took 12.3695611s WaitForService to wait for kubelet.
	I1226 21:50:54.371676   10896 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1226 21:50:54.379678   10896 retry.go:31] will retry after 347.878809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1226 21:50:54.379678   10896 kubeadm.go:581] duration metric: took 35.0487559s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 21:50:54.379678   10896 node_conditions.go:102] verifying NodePressure condition ...
	I1226 21:50:54.389677   10896 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1226 21:50:54.389677   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:54.392677   10896 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 21:50:54.392677   10896 node_conditions.go:123] node cpu capacity is 2
	I1226 21:50:54.393717   10896 node_conditions.go:105] duration metric: took 14.0387ms to run NodePressure ...
	I1226 21:50:54.393717   10896 start.go:228] waiting for startup goroutines ...
	I1226 21:50:54.479344   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:54.749941   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:50:54.911983   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:54.983220   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:55.399644   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:55.510415   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:55.939196   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:55.988002   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:56.389148   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:56.487020   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:56.890249   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:56.994385   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:57.405190   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:57.437940   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (15.2349407s)
	I1226 21:50:57.438012   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.3252682s)
	I1226 21:50:57.438012   10896 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-839600"
	I1226 21:50:57.438091   10896 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (12.9753539s)
	I1226 21:50:57.442065   10896 out.go:177] * Verifying csi-hostpath-driver addon...
	I1226 21:50:57.444918   10896 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1226 21:50:57.447864   10896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:50:57.446624   10896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1226 21:50:57.450212   10896 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1226 21:50:57.450212   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1226 21:50:57.520874   10896 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:50:57.520929   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:50:57.532296   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:57.656128   10896 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1226 21:50:57.656128   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1226 21:50:57.837225   10896 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:50:57.837284   10896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1226 21:50:57.900826   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:57.970014   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:50:57.980207   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:57.991172   10896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:50:58.393368   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:58.457221   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:50:58.489776   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:58.925819   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:58.988014   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:50:58.991228   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:59.195149   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.4451254s)
	I1226 21:50:59.405580   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:59.468429   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:50:59.478584   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:50:59.889812   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:50:59.971310   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:50:59.979014   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:00.388911   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:00.471926   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:00.476230   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:00.897924   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:00.961946   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:00.980107   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:01.097869   10896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.1061443s)
	I1226 21:51:01.105813   10896 addons.go:473] Verifying addon gcp-auth=true in "addons-839600"
	I1226 21:51:01.108885   10896 out.go:177] * Verifying gcp-auth addon...
	I1226 21:51:01.115012   10896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1226 21:51:01.134525   10896 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1226 21:51:01.134525   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:01.396888   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:01.463241   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:01.485477   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:01.632488   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:01.888746   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:01.965317   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:01.981343   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:02.122403   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:02.392938   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:02.458457   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:02.488996   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:02.628356   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:02.904688   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:02.967286   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:02.981847   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:03.137163   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:03.390403   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:03.470610   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:03.477607   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:03.625655   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:03.895634   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:03.961459   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:03.990110   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:04.132465   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:04.388113   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:04.468690   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:04.481879   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:04.624248   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:04.893575   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:04.959531   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:04.991184   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:05.132083   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:05.405735   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:05.464501   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:05.476574   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:05.631713   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:05.980305   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:05.986138   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:05.994596   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:06.269772   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:06.388261   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:06.469195   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:06.477212   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:06.622184   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:06.897208   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:06.962111   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:06.991770   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:07.135659   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:07.487278   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:07.493971   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:07.497131   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:07.621130   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:07.899749   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:07.982235   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:07.987844   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:08.128938   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:08.398119   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:08.464022   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:08.476587   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:08.635291   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:08.891469   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:08.970488   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:08.976999   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:09.126687   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:09.396071   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:09.461305   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:09.492622   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:09.631046   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:09.900633   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:09.964413   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:09.978501   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:10.132421   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:10.390906   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:10.471710   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:10.480446   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:10.625245   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:10.893746   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:10.958708   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:10.987530   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:11.128342   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:11.400949   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:11.465334   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:11.480294   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:11.621086   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:11.893103   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:11.956955   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:11.985607   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:12.126966   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:12.395657   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:12.462055   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:12.475403   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:12.632412   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:12.888089   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:12.967291   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:12.980910   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:13.123591   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:13.394308   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:13.458970   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:13.490003   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:13.629722   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:13.901182   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:13.965664   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:13.980671   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:14.121175   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:14.391605   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:14.457993   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:14.489084   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:14.626453   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:14.896153   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:14.960714   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:14.990873   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:15.131509   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:15.387756   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:15.470478   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:15.475867   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:15.629943   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:15.898068   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:15.961882   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:15.976035   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:16.138765   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:16.401581   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:16.465490   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:16.480057   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:16.636636   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:16.888001   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:16.968976   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:16.975390   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:17.125595   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:17.394412   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:17.459861   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:17.487196   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:17.627013   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:17.894364   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:17.960969   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:17.989289   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:18.130461   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:18.406594   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:18.468325   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:18.476226   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:18.636389   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:18.890612   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:18.969577   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:18.974581   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:19.127580   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:19.395321   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:19.461619   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:19.490509   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:19.630371   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:19.900996   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:19.966929   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:19.978799   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:20.139358   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:20.388635   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:20.470567   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:20.476304   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:20.626226   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:20.895970   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:20.959604   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:20.990611   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:21.134464   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:21.399231   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:21.464832   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:21.479467   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:21.635040   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:21.888152   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:21.971458   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:21.975321   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:22.128040   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:22.400379   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:22.462968   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:22.475529   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:22.634954   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:22.901327   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:22.967009   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:22.981531   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:23.122884   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:23.390990   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:23.470192   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:23.476055   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:23.627257   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:23.898176   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:23.962062   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:23.976144   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:24.131638   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:24.401574   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:24.467015   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:24.479602   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:24.623015   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:24.894328   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:24.961281   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:24.989194   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:25.128609   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:25.401780   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:25.461338   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:25.474392   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:25.635996   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:25.888173   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:25.972131   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:25.976132   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:26.125992   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:26.776885   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:26.776885   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:26.779685   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:26.783033   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:26.898774   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:26.964963   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:26.990278   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:27.131803   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:27.395738   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:27.459945   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:27.491735   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:27.627887   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:27.898575   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:27.960051   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:27.989408   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:28.128336   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:28.476509   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:28.476886   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:28.485010   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:28.625873   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:28.900946   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:28.973866   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:28.978630   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:29.153689   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:29.412737   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:29.465359   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:29.479556   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:29.622230   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:29.889189   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:29.969364   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:29.974891   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:30.130423   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:30.398147   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:30.463742   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:30.474780   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:30.633920   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:30.888449   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:30.968102   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:30.980973   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:31.124633   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:31.397688   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:31.464772   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:31.476289   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:31.630884   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:31.886560   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:31.967837   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:31.982165   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:32.125361   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:32.396891   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:32.462369   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:32.476169   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:32.628323   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:32.886439   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:32.969580   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:32.975660   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:33.123325   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:33.393759   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:33.456076   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:33.486503   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:33.627930   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:33.896827   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:33.962200   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:33.975687   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:34.131419   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:34.403445   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:34.465210   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:34.479113   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:34.636020   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:34.892919   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:34.971980   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:34.977567   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:35.125734   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:35.394097   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:35.461658   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:35.489334   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:35.630638   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:35.901209   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:35.969382   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:35.974884   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:36.123301   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:36.394424   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:36.459318   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:36.488573   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:36.632341   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:36.901359   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:36.967131   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:36.979249   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:37.135068   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:37.393970   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:37.457072   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:37.486084   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:37.626787   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:37.899781   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:37.963820   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:37.976745   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:38.134311   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:38.390238   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:38.472186   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:38.476889   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:38.626352   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:38.898727   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:39.129915   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:39.130640   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:39.136283   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:39.394167   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:39.472505   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:39.478671   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:39.628083   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:39.897238   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:39.963224   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:39.977417   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:40.136175   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:40.390233   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:40.473348   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:40.478360   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:40.976507   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:40.977168   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:40.978548   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:40.982264   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:41.122789   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:41.394735   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:41.459797   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:41.490718   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:41.635236   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:41.890260   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:41.971463   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:41.978158   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:42.126267   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:42.396693   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:42.459798   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:42.488781   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:42.628507   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:42.899972   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:42.966637   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:42.982162   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:43.135986   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:43.391351   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:44.366669   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:44.370113   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:44.370113   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:44.371416   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:44.377434   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:44.378319   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:44.381206   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:44.386528   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:44.475803   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:44.481368   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:44.821838   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:45.176960   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:45.178042   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:45.180037   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:45.184765   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:45.526938   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:45.533305   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:45.534305   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:45.635494   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:45.899728   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:45.967629   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:45.980336   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:46.137859   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:46.392995   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:46.582481   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:46.587491   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:46.623477   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:46.893384   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:46.958340   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:46.986937   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:47.132914   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:47.390524   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:47.470513   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:47.476827   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:47.627259   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:47.900410   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:47.965299   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:47.978300   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:48.121872   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:48.393455   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:48.460338   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:48.487507   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:48.628890   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:48.901876   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:48.964445   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:48.977841   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:49.121136   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:49.392809   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:49.472691   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:49.477738   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:49.627445   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:49.899426   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:49.965546   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:49.978971   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:50.135687   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:50.389715   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:50.471070   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:50.476295   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:50.628763   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:50.899035   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:50.963623   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:50.976717   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:51.132639   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:51.388448   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:51.470467   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:51.476919   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:51.627165   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:51.895880   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:51.960662   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:51.987769   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:52.130317   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:52.386828   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:52.467915   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:52.480986   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:52.621883   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:52.893907   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:52.972664   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:52.977508   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:53.126342   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:53.394786   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:53.459911   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:53.487920   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:53.631479   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:53.889869   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:53.968451   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:53.983865   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:54.125098   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:54.394918   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:54.460019   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:54.490458   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:54.633034   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:54.890340   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:54.971802   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:54.981513   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:55.127646   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:55.396116   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:55.458576   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:55.491201   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:55.632314   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:55.886723   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:55.965516   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:55.979549   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:56.134496   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:56.388095   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:56.717906   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:56.722239   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:56.724406   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:56.889526   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:56.970969   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:56.976737   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:57.124359   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:57.392955   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:57.473878   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:57.478076   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:57.855620   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:57.886752   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:58.218731   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:58.223937   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:58.225001   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:58.390886   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:58.471767   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:58.477417   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:58.622947   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:58.896566   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:58.959469   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:58.986594   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:59.126803   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:59.400461   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:59.469522   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:59.477163   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:51:59.636133   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:51:59.891837   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:51:59.969750   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:51:59.975667   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:52:00.126529   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:00.398273   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:00.461970   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:00.476083   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:52:00.630501   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:01.078856   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:01.081510   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:01.084228   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:52:01.129746   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:01.398547   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:01.461855   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:01.491077   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:52:01.629614   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:01.899690   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:02.012167   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:02.013142   10896 kapi.go:107] duration metric: took 1m16.0442978s to wait for kubernetes.io/minikube-addons=registry ...
	I1226 21:52:02.120146   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:02.473639   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:02.474612   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:02.627630   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:02.891223   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:02.975747   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:03.126730   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:03.398075   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:03.462829   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:03.635425   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:03.889485   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:03.970062   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:04.133438   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:04.400850   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:04.465344   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:04.635855   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:04.887797   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:04.970482   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:05.128184   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:05.397123   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:05.463112   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:05.634500   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:05.890132   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:05.971149   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:06.123996   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:06.396620   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:06.460593   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:06.631974   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:06.902773   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:06.969557   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:07.124909   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:07.398480   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:07.458288   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:07.633899   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:07.891575   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:07.971114   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:08.126453   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:08.397027   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:08.460752   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:08.632973   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:08.900674   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:08.964308   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:09.122233   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:09.394574   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:09.458643   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:09.631010   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:09.928597   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:10.419080   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:10.419654   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:10.424138   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:10.466397   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:11.126179   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:11.127725   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:11.130713   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:11.132707   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:11.392057   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:11.476879   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:11.629644   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:11.898515   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:11.964545   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:12.133550   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:12.400948   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:12.472631   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:12.633925   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:12.889959   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:12.969722   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:13.123370   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:13.394093   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:13.457894   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:13.628345   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:13.899552   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:13.963518   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:14.135865   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:14.388675   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:14.473868   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:14.626541   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:15.164453   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:15.164453   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:15.168884   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:15.397180   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:15.462097   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:15.632233   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:15.901859   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:15.969766   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:16.135760   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:16.389670   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:16.473890   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:16.626155   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:16.897807   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:16.964311   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:17.133362   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:17.387970   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:17.472406   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:17.625151   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:17.899318   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:18.069664   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:18.134787   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:18.404801   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:18.503085   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:18.633479   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:18.892929   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:18.957299   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:19.130772   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:19.385791   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:19.462249   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:19.633795   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:19.887160   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:19.969971   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:20.127532   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:20.392961   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:20.459550   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:20.631711   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:20.888258   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:20.970082   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:21.125525   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:21.397382   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:21.462520   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:21.638395   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:21.890232   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:21.969921   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:22.126342   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:22.396078   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:22.461157   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:22.635082   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:22.905536   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:22.972160   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:23.127640   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:23.390290   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:23.474769   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:23.626076   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:23.893253   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:23.960941   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:24.129213   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:24.401416   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:24.467661   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:24.620963   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:24.892781   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:24.959781   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:25.127101   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:25.400115   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:25.464778   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:25.636729   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:25.891307   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:25.957556   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:26.128810   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:26.402707   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:26.466999   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:26.623594   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:26.897458   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:26.959786   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:27.132106   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:27.404956   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:27.464185   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:27.632681   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:27.899555   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:27.964592   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:28.136437   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:28.533478   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:28.533593   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:29.300733   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:29.301377   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:29.304626   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:29.321547   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:29.387352   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:29.468658   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:29.623234   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:29.897061   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:30.030581   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:30.129617   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:30.401554   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:30.466912   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:30.635296   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:30.887099   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:30.968464   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:31.123359   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:31.427155   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:31.464053   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:31.630585   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:31.896948   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:31.966125   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:32.122429   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:32.390399   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:32.474381   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:32.626631   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:32.888112   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:32.958274   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:33.130777   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:33.826692   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:33.827226   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:33.832858   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:34.020129   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:34.022953   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:34.125194   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:34.403315   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:34.461521   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:34.631552   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:34.896726   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:34.960979   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:35.127986   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:35.396329   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:35.461332   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:35.630874   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:35.901559   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:35.973177   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:36.121385   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:36.393186   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:36.459025   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:36.627337   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:36.895933   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:36.960494   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:37.133243   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:37.401876   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:37.467871   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:37.632898   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:37.896237   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:37.966249   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:38.122237   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:38.395653   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:38.458084   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:38.631235   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:38.886851   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:38.963190   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:39.140412   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:39.396223   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:39.465092   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:39.631690   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:39.887568   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:39.969453   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:40.125775   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:40.395761   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:40.462177   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:40.631497   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:41.020266   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:41.023290   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:41.273865   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:41.399599   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:41.466029   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:41.627991   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:41.895427   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:41.958940   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:42.129922   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:42.398733   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:42.462679   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:42.632067   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:42.890144   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:42.989681   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:43.126375   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:43.397704   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:43.461365   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:43.634027   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:43.889759   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:43.971503   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:44.126344   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:44.395711   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:44.460196   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:45.025941   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:45.030292   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:45.030628   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:45.131496   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:45.392141   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:45.469154   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:45.633117   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:45.899539   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:45.963947   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:46.131716   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:46.387580   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:46.470341   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:46.626013   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:46.895752   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:46.963184   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:47.133257   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:47.388487   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:47.471351   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:47.625161   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:47.894957   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:47.959939   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:48.135576   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:48.471591   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:48.474862   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:48.630438   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:48.897429   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:48.962955   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:49.134432   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:49.394280   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:49.469987   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:49.627516   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:49.900133   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:49.966799   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:50.136130   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:50.390739   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:50.471536   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:50.623690   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:50.893103   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:50.959825   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:51.130810   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:51.400805   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:51.907417   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:51.909032   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:51.909923   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:51.968638   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:52.137309   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:52.387675   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:52.465453   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:52.623818   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:52.889641   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:52.969069   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:53.124726   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:53.397306   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:53.460696   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:53.630917   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:53.897517   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:53.961076   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:54.132458   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:54.399993   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:54.464393   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:54.621541   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:54.891287   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:55.384627   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:55.385310   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:55.390494   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:55.459420   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:55.628758   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:55.898146   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:55.963304   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:56.135070   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:56.392898   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:56.469904   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:56.623224   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:56.894618   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:56.966023   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:57.134238   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:57.388915   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:57.471012   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:57.626827   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:57.896150   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:57.960055   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:58.128707   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:58.404257   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:58.468284   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:58.636063   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:58.892586   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:58.957170   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:59.129845   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:59.564885   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:59.584777   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:52:59.626502   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:52:59.895646   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:52:59.960455   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:00.133578   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:00.390410   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:00.472395   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:00.629568   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:00.893079   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:00.957282   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:01.127334   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:01.396663   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:01.461065   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:01.635596   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:01.888039   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:01.969193   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:02.123376   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:02.393739   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:02.457675   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:02.629880   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:03.125201   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:03.127247   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:03.128076   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:03.406889   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:03.467653   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:03.631495   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:03.892911   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:03.957693   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:04.126309   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:04.396943   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:04.462408   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:04.630705   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:04.900328   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:04.967258   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:05.142332   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:05.394226   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:05.460756   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:05.632907   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:05.897439   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:05.965079   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:06.134076   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:06.401104   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:06.464900   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:06.636955   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:06.903222   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:06.966001   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:07.123597   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:07.390881   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:07.472638   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:07.624575   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:07.899091   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:07.962296   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:08.134697   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:08.390433   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:08.468064   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:08.622072   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:08.893722   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:08.958275   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:09.128226   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:09.397074   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:09.462344   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:09.632325   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:09.901049   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:10.171657   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:10.171657   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:10.394679   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:10.460763   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:10.631680   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:10.900476   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:10.965689   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:11.133679   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:11.389698   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:11.470497   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:11.630594   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:11.897176   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:11.962816   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:12.134175   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:12.388056   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:12.467580   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:12.625812   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:12.895724   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:12.963054   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:13.135178   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:13.388874   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:13.470782   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:13.914931   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:13.917544   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:14.109199   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:14.188001   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:14.395867   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:14.460468   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:53:14.632042   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:14.898566   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:14.964341   10896 kapi.go:107] duration metric: took 2m17.5177167s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1226 21:53:15.133199   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:15.392374   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:15.634807   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:15.903475   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:16.122461   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:16.392474   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:16.629073   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:16.898690   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:17.576729   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:17.577732   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:17.625173   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:17.896638   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:18.133092   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:18.402450   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:18.623272   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:18.895128   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:19.133936   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:19.387718   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:19.623839   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:19.897907   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:20.134443   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:20.389004   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:20.626220   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:20.895786   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:21.133302   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:21.391201   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:21.626710   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:21.899232   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:22.133579   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:22.390430   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:22.626323   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:22.896577   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:23.134225   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:23.391466   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:23.625788   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:23.895407   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:24.131172   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:24.387854   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:24.625260   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:24.896267   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:25.134744   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:25.387772   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:25.629366   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:25.896866   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:26.130580   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:26.394288   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:26.629746   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:26.900422   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:27.121996   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:27.393246   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:27.629534   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:27.900516   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:28.121710   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:28.393172   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:28.621296   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:28.893088   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:29.818963   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:30.170202   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:30.178710   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:30.178710   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:30.288396   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:30.407642   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:30.641361   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:30.902985   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:31.125341   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:31.394288   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:31.629902   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:31.900497   10896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:53:32.136577   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:32.405968   10896 kapi.go:107] duration metric: took 2m38.0342919s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1226 21:53:32.629961   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:33.131335   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:33.625207   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:34.120414   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:34.779660   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:35.133470   10896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:53:35.623429   10896 kapi.go:107] duration metric: took 2m34.508417s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1226 21:53:35.627087   10896 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-839600 cluster.
	I1226 21:53:35.629796   10896 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1226 21:53:35.632563   10896 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1226 21:53:35.635550   10896 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, metrics-server, helm-tiller, inspektor-gadget, storage-provisioner, nvidia-device-plugin, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1226 21:53:35.639747   10896 addons.go:508] enable addons completed in 3m17.1616011s: enabled=[cloud-spanner ingress-dns metrics-server helm-tiller inspektor-gadget storage-provisioner nvidia-device-plugin yakd storage-provisioner-rancher default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1226 21:53:35.639747   10896 start.go:233] waiting for cluster config update ...
	I1226 21:53:35.639747   10896 start.go:242] writing updated cluster config ...
	I1226 21:53:35.654758   10896 ssh_runner.go:195] Run: rm -f paused
	I1226 21:53:36.026903   10896 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 21:53:36.030848   10896 out.go:177] * Done! kubectl is now configured to use "addons-839600" cluster and "default" namespace by default
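
	The gcp-auth messages above describe opting a pod out of credential mounting via the `gcp-auth-skip-secret` label. A minimal pod-spec sketch of that label (pod name and image are illustrative, not from this test run):

	```yaml
	# Pods carrying this label are skipped by the gcp-auth webhook,
	# so no GCP credentials are mounted into them.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-pod        # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	    - name: app
	      image: busybox            # illustrative image
	```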
	
	
	==> Docker <==
	-- Journal begins at Tue 2023-12-26 21:48:08 UTC, ends at Tue 2023-12-26 21:54:33 UTC. --
	Dec 26 21:54:20 addons-839600 dockerd[1328]: time="2023-12-26T21:54:20.396589749Z" level=info msg="shim disconnected" id=a6dd98024b93019c5be9c88cc193b1aca9266d0bf1372a39622358d14539b13c namespace=moby
	Dec 26 21:54:20 addons-839600 dockerd[1328]: time="2023-12-26T21:54:20.396779550Z" level=warning msg="cleaning up after shim disconnected" id=a6dd98024b93019c5be9c88cc193b1aca9266d0bf1372a39622358d14539b13c namespace=moby
	Dec 26 21:54:20 addons-839600 dockerd[1328]: time="2023-12-26T21:54:20.396999951Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 26 21:54:21 addons-839600 dockerd[1322]: time="2023-12-26T21:54:21.235594058Z" level=info msg="ignoring event" container=b83697e8b0b713afe4c41cdef61bee5d4b6fa9d54836cb80ac307910d4dacf4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 21:54:21 addons-839600 dockerd[1328]: time="2023-12-26T21:54:21.237522070Z" level=info msg="shim disconnected" id=b83697e8b0b713afe4c41cdef61bee5d4b6fa9d54836cb80ac307910d4dacf4b namespace=moby
	Dec 26 21:54:21 addons-839600 dockerd[1328]: time="2023-12-26T21:54:21.237909272Z" level=warning msg="cleaning up after shim disconnected" id=b83697e8b0b713afe4c41cdef61bee5d4b6fa9d54836cb80ac307910d4dacf4b namespace=moby
	Dec 26 21:54:21 addons-839600 dockerd[1328]: time="2023-12-26T21:54:21.237942872Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 26 21:54:28 addons-839600 dockerd[1328]: time="2023-12-26T21:54:28.108477287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 21:54:28 addons-839600 dockerd[1328]: time="2023-12-26T21:54:28.108670488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 21:54:28 addons-839600 dockerd[1328]: time="2023-12-26T21:54:28.108696888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 21:54:28 addons-839600 dockerd[1328]: time="2023-12-26T21:54:28.108709489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 21:54:28 addons-839600 cri-dockerd[1215]: time="2023-12-26T21:54:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e88e3628fc91af30a54772318f9b7b5a23ec5bf3297a65373601588381e1afc9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 26 21:54:29 addons-839600 cri-dockerd[1215]: time="2023-12-26T21:54:29Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Dec 26 21:54:29 addons-839600 dockerd[1328]: time="2023-12-26T21:54:29.690331720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 21:54:29 addons-839600 dockerd[1328]: time="2023-12-26T21:54:29.690602321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 21:54:29 addons-839600 dockerd[1328]: time="2023-12-26T21:54:29.690727122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 21:54:29 addons-839600 dockerd[1328]: time="2023-12-26T21:54:29.691041324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 21:54:30 addons-839600 dockerd[1328]: time="2023-12-26T21:54:30.737277638Z" level=info msg="shim disconnected" id=68c54007e882ee2174139b8f4d3fb28d9697c540e999e24d73acd335f4338102 namespace=moby
	Dec 26 21:54:30 addons-839600 dockerd[1328]: time="2023-12-26T21:54:30.739546350Z" level=warning msg="cleaning up after shim disconnected" id=68c54007e882ee2174139b8f4d3fb28d9697c540e999e24d73acd335f4338102 namespace=moby
	Dec 26 21:54:30 addons-839600 dockerd[1328]: time="2023-12-26T21:54:30.740240653Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 26 21:54:30 addons-839600 dockerd[1322]: time="2023-12-26T21:54:30.741279259Z" level=info msg="ignoring event" container=68c54007e882ee2174139b8f4d3fb28d9697c540e999e24d73acd335f4338102 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 21:54:30 addons-839600 dockerd[1322]: time="2023-12-26T21:54:30.904737133Z" level=info msg="ignoring event" container=cb92c5edf493373484628354b8c16af2bef20981f242af624bdfac576f9eb3b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 21:54:30 addons-839600 dockerd[1328]: time="2023-12-26T21:54:30.908593153Z" level=info msg="shim disconnected" id=cb92c5edf493373484628354b8c16af2bef20981f242af624bdfac576f9eb3b0 namespace=moby
	Dec 26 21:54:30 addons-839600 dockerd[1328]: time="2023-12-26T21:54:30.908903555Z" level=warning msg="cleaning up after shim disconnected" id=cb92c5edf493373484628354b8c16af2bef20981f242af624bdfac576f9eb3b0 namespace=moby
	Dec 26 21:54:30 addons-839600 dockerd[1328]: time="2023-12-26T21:54:30.908925355Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	bb0a5b4c466f1       nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026                                                                4 seconds ago        Running             task-pv-container                        0                   e88e3628fc91a       task-pv-pod-restore
	0182a52196288       busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8                                                              15 seconds ago       Exited              busybox                                  0                   b83697e8b0b71       test-local-path
	2bed63887be4b       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              21 seconds ago       Exited              helper-pod                               0                   e2286fce94c15       helper-pod-create-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae
	ed45968ebc078       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 59 seconds ago       Running             gcp-auth                                 0                   f4720f3efb9b0       gcp-auth-d4c87556c-4k79n
	cca011958f1e5       registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e                             About a minute ago   Running             controller                               0                   dea87e6a01af0       ingress-nginx-controller-69cff4fd79-k7k24
	67c7c503da110       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   330248b0d7920       csi-hostpathplugin-8pgrw
	1bdd07977fb6f       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   330248b0d7920       csi-hostpathplugin-8pgrw
	6be5e39056b6f       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   330248b0d7920       csi-hostpathplugin-8pgrw
	ac46c61215f21       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   330248b0d7920       csi-hostpathplugin-8pgrw
	3f836302669b6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   330248b0d7920       csi-hostpathplugin-8pgrw
	9f3d20ee6bc1c       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   96dd6e186d76d       csi-hostpath-resizer-0
	9f2a945d914c7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   330248b0d7920       csi-hostpathplugin-8pgrw
	c81cb286ab055       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   0c16e6047316b       csi-hostpath-attacher-0
	bac6bbd781f61       1ebff0f9671bc                                                                                                                                About a minute ago   Exited              patch                                    1                   69a652324bb8f       ingress-nginx-admission-patch-v2jgj
	3ab39bc65d188       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              create                                   0                   3a5aa15331401       ingress-nginx-admission-create-txmg5
	809e400f39393       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   0a5b9acb9fcdb       snapshot-controller-58dbcc7b99-zdxtm
	dc7dbee0828f2       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   8697edee5d91a       snapshot-controller-58dbcc7b99-4sczr
	494df7c140649       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   778301f85cfe2       local-path-provisioner-78b46b4d5c-fb7m5
	b6f2589dba22d       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   d070b93745b20       tiller-deploy-7b677967b9-4xqr4
	036776c404cfb       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   0757ea4de371c       yakd-dashboard-9947fc6bf-45j2w
	0c26961589871       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   080ef0ad8cede       kube-ingress-dns-minikube
	8632f86788627       gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49                               3 minutes ago        Running             cloud-spanner-emulator                   0                   910c74488d285       cloud-spanner-emulator-64c8c85f65-z5tw5
	627e39e69e6ed       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   224523875bfbf       storage-provisioner
	dd9c74e6c6eea       ead0a4a53df89                                                                                                                                4 minutes ago        Running             coredns                                  0                   fe16cb5bbe09d       coredns-5dd5756b68-4j7jx
	f19cbe0604c5e       83f6cc407eed8                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   e04ad40db01df       kube-proxy-7jqgh
	63321452cb516       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   e3c15c166ccb0       etcd-addons-839600
	adff06e2f005d       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   f87813f8f26f6       kube-scheduler-addons-839600
	9031d364393ea       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   4c3e171669e8c       kube-controller-manager-addons-839600
	32e7bd1104e51       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   22dec260fdf21       kube-apiserver-addons-839600
	
	
	==> controller_ingress [cca011958f1e] <==
	W1226 21:53:30.816303       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1226 21:53:30.816908       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I1226 21:53:30.831624       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I1226 21:53:31.083638       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1226 21:53:31.132797       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1226 21:53:31.156653       7 nginx.go:260] "Starting NGINX Ingress controller"
	I1226 21:53:31.190215       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"dbcaf90a-888c-4c80-9ae7-2870ed3ae36b", APIVersion:"v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1226 21:53:31.197026       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"e699a820-ae74-44f7-8452-0fdfce45856b", APIVersion:"v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1226 21:53:31.197071       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"a71b4152-3942-4fa7-a095-9f5c2ce0f0c8", APIVersion:"v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1226 21:53:32.359394       7 nginx.go:303] "Starting NGINX process"
	I1226 21:53:32.359483       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1226 21:53:32.362544       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1226 21:53:32.367338       7 controller.go:190] "Configuration changes detected, backend reload required"
	I1226 21:53:32.402379       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1226 21:53:32.403989       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-69cff4fd79-k7k24"
	I1226 21:53:32.439821       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-69cff4fd79-k7k24" node="addons-839600"
	I1226 21:53:32.683291       7 controller.go:210] "Backend successfully reloaded"
	I1226 21:53:32.683397       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I1226 21:53:32.683432       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-k7k24", UID:"38598a6a-3810-46f3-96db-2e0af8aa45dd", APIVersion:"v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [dd9c74e6c6ee] <==
	[INFO] 10.244.0.6:38852 - 23057 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000361s
	[INFO] 10.244.0.6:53811 - 9929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000294399s
	[INFO] 10.244.0.6:53811 - 10443 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002455193s
	[INFO] 10.244.0.6:41236 - 41145 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000737s
	[INFO] 10.244.0.6:41236 - 5309 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001902s
	[INFO] 10.244.0.6:50947 - 46152 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000275499s
	[INFO] 10.244.0.6:50947 - 22596 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001329s
	[INFO] 10.244.0.6:34924 - 7474 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000525s
	[INFO] 10.244.0.6:34924 - 24125 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001752s
	[INFO] 10.244.0.6:37492 - 14375 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000191599s
	[INFO] 10.244.0.6:37492 - 55072 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001884s
	[INFO] 10.244.0.6:48648 - 62097 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000203099s
	[INFO] 10.244.0.6:40350 - 23273 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002111s
	[INFO] 10.244.0.6:37922 - 55176 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000585s
	[INFO] 10.244.0.6:37268 - 15960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115799s
	[INFO] 10.244.0.22:56315 - 29945 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000294804s
	[INFO] 10.244.0.22:60469 - 51415 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176403s
	[INFO] 10.244.0.22:58844 - 31072 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000428805s
	[INFO] 10.244.0.22:56821 - 46814 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082401s
	[INFO] 10.244.0.22:58044 - 41991 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183002s
	[INFO] 10.244.0.22:50046 - 7689 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079001s
	[INFO] 10.244.0.22:43021 - 33128 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.005413662s
	[INFO] 10.244.0.22:37125 - 31135 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.006289072s
	[INFO] 10.244.0.23:58366 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000403404s
	[INFO] 10.244.0.23:38787 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000114201s
	
	
	==> describe nodes <==
	Name:               addons-839600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-839600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=addons-839600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T21_50_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-839600
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-839600"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 21:50:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-839600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 21:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 21:54:14 +0000   Tue, 26 Dec 2023 21:49:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 21:54:14 +0000   Tue, 26 Dec 2023 21:49:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 21:54:14 +0000   Tue, 26 Dec 2023 21:49:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 21:54:14 +0000   Tue, 26 Dec 2023 21:50:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.177.30
	  Hostname:    addons-839600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5eb95f0bdf741e680348bdb941db2bd
	  System UUID:                b3404dd6-4480-4042-bd45-dfc2a77c95a9
	  Boot ID:                    1f41e394-3969-416c-b31b-cf519ead7bbe
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-z5tw5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  default                     task-pv-pod-restore                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gcp-auth                    gcp-auth-d4c87556c-4k79n                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-k7k24                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m39s
	  kube-system                 coredns-5dd5756b68-4j7jx                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m13s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 csi-hostpathplugin-8pgrw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-addons-839600                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m26s
	  kube-system                 kube-apiserver-addons-839600                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-addons-839600                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-7jqgh                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-addons-839600                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-58dbcc7b99-4sczr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 snapshot-controller-58dbcc7b99-zdxtm                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 tiller-deploy-7b677967b9-4xqr4                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  local-path-storage          helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fb7m5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-45j2w                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node addons-839600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node addons-839600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node addons-839600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s                  kubelet          Node addons-839600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s                  kubelet          Node addons-839600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s                  kubelet          Node addons-839600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m25s                  kubelet          Node addons-839600 status is now: NodeReady
	  Normal  RegisteredNode           4m15s                  node-controller  Node addons-839600 event: Registered Node addons-839600 in Controller
	
	
	==> dmesg <==
	[  +0.231873] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +1.367801] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.408181] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.183617] systemd-fstab-generator[1171]: Ignoring "noauto" for root device
	[  +0.180705] systemd-fstab-generator[1182]: Ignoring "noauto" for root device
	[  +0.175635] systemd-fstab-generator[1193]: Ignoring "noauto" for root device
	[  +0.225879] systemd-fstab-generator[1207]: Ignoring "noauto" for root device
	[ +10.223240] systemd-fstab-generator[1313]: Ignoring "noauto" for root device
	[  +5.686389] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.202729] systemd-fstab-generator[1678]: Ignoring "noauto" for root device
	[  +0.811891] kauditd_printk_skb: 29 callbacks suppressed
	[Dec26 21:50] systemd-fstab-generator[2666]: Ignoring "noauto" for root device
	[ +31.153523] kauditd_printk_skb: 24 callbacks suppressed
	[ +10.513663] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.682938] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.142236] kauditd_printk_skb: 8 callbacks suppressed
	[Dec26 21:51] kauditd_printk_skb: 51 callbacks suppressed
	[Dec26 21:52] kauditd_printk_skb: 20 callbacks suppressed
	[Dec26 21:53] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.752020] kauditd_printk_skb: 3 callbacks suppressed
	[ +21.632972] kauditd_printk_skb: 24 callbacks suppressed
	[Dec26 21:54] hrtimer: interrupt took 2713721 ns
	[  +7.465685] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.666151] kauditd_printk_skb: 10 callbacks suppressed
	[ +12.310987] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [63321452cb51] <==
	{"level":"warn","ts":"2023-12-26T21:54:09.682449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:54:09.368741Z","time spent":"313.69983ms","remote":"127.0.0.1:46766","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":4162,"request content":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" "}
	{"level":"warn","ts":"2023-12-26T21:54:09.682642Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.220186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8554"}
	{"level":"info","ts":"2023-12-26T21:54:09.682776Z","caller":"traceutil/trace.go:171","msg":"trace[716707268] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1460; }","duration":"237.351187ms","start":"2023-12-26T21:54:09.445413Z","end":"2023-12-26T21:54:09.682765Z","steps":["trace[716707268] 'agreement among raft nodes before linearized reading'  (duration: 237.185186ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:09.68428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.19864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae\" ","response":"range_response_count:1 size:3992"}
	{"level":"info","ts":"2023-12-26T21:54:09.684313Z","caller":"traceutil/trace.go:171","msg":"trace[1197988877] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae; range_end:; response_count:1; response_revision:1460; }","duration":"132.23294ms","start":"2023-12-26T21:54:09.552071Z","end":"2023-12-26T21:54:09.684304Z","steps":["trace[1197988877] 'agreement among raft nodes before linearized reading'  (duration: 132.17184ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:09.68443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.191672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2023-12-26T21:54:09.684452Z","caller":"traceutil/trace.go:171","msg":"trace[1060763607] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1460; }","duration":"221.213972ms","start":"2023-12-26T21:54:09.463232Z","end":"2023-12-26T21:54:09.684446Z","steps":["trace[1060763607] 'agreement among raft nodes before linearized reading'  (duration: 221.168772ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:10.219499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.37659ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7686110367698762489 > lease_revoke:<id:6aaa8ca81b21b995>","response":"size:28"}
	{"level":"info","ts":"2023-12-26T21:54:10.219652Z","caller":"traceutil/trace.go:171","msg":"trace[1258233025] linearizableReadLoop","detail":"{readStateIndex:1527; appliedIndex:1526; }","duration":"172.986513ms","start":"2023-12-26T21:54:10.046652Z","end":"2023-12-26T21:54:10.219638Z","steps":["trace[1258233025] 'read index received'  (duration: 17.288721ms)","trace[1258233025] 'applied index is now lower than readState.Index'  (duration: 155.695992ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:54:10.219762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.152314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:501"}
	{"level":"info","ts":"2023-12-26T21:54:10.219807Z","caller":"traceutil/trace.go:171","msg":"trace[1131632677] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1460; }","duration":"173.206614ms","start":"2023-12-26T21:54:10.046593Z","end":"2023-12-26T21:54:10.2198Z","steps":["trace[1131632677] 'agreement among raft nodes before linearized reading'  (duration: 173.084313ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:10.761644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.508619ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7686110367698762491 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1431 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-26T21:54:10.76172Z","caller":"traceutil/trace.go:171","msg":"trace[864035105] linearizableReadLoop","detail":"{readStateIndex:1528; appliedIndex:1527; }","duration":"318.724935ms","start":"2023-12-26T21:54:10.442983Z","end":"2023-12-26T21:54:10.761708Z","steps":["trace[864035105] 'read index received'  (duration: 159.051715ms)","trace[864035105] 'applied index is now lower than readState.Index'  (duration: 159.67242ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T21:54:10.762033Z","caller":"traceutil/trace.go:171","msg":"trace[1662556724] transaction","detail":"{read_only:false; response_revision:1461; number_of_response:1; }","duration":"537.64547ms","start":"2023-12-26T21:54:10.224377Z","end":"2023-12-26T21:54:10.762023Z","steps":["trace[1662556724] 'process raft request'  (duration: 377.705548ms)","trace[1662556724] 'compare'  (duration: 159.290617ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:54:10.762081Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:54:10.22436Z","time spent":"537.69437ms","remote":"127.0.0.1:46786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1431 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2023-12-26T21:54:10.762299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.337739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2023-12-26T21:54:10.76232Z","caller":"traceutil/trace.go:171","msg":"trace[1020488863] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1461; }","duration":"319.360539ms","start":"2023-12-26T21:54:10.442953Z","end":"2023-12-26T21:54:10.762314Z","steps":["trace[1020488863] 'agreement among raft nodes before linearized reading'  (duration: 319.313938ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:10.762336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:54:10.442936Z","time spent":"319.395239ms","remote":"127.0.0.1:46760","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":1435,"request content":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" "}
	{"level":"warn","ts":"2023-12-26T21:54:10.762474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.15321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8554"}
	{"level":"info","ts":"2023-12-26T21:54:10.762492Z","caller":"traceutil/trace.go:171","msg":"trace[2040089331] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1461; }","duration":"315.17211ms","start":"2023-12-26T21:54:10.447315Z","end":"2023-12-26T21:54:10.762487Z","steps":["trace[2040089331] 'agreement among raft nodes before linearized reading'  (duration: 315.10221ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:10.762512Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:54:10.447308Z","time spent":"315.19911ms","remote":"127.0.0.1:46766","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":8577,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-12-26T21:54:10.764724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.131995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:54:10.764749Z","caller":"traceutil/trace.go:171","msg":"trace[1531575284] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1461; }","duration":"156.160195ms","start":"2023-12-26T21:54:10.608582Z","end":"2023-12-26T21:54:10.764743Z","steps":["trace[1531575284] 'agreement among raft nodes before linearized reading'  (duration: 156.084294ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:54:10.765105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.5355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2023-12-26T21:54:10.765133Z","caller":"traceutil/trace.go:171","msg":"trace[266848028] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1461; }","duration":"299.566601ms","start":"2023-12-26T21:54:10.465559Z","end":"2023-12-26T21:54:10.765126Z","steps":["trace[266848028] 'agreement among raft nodes before linearized reading'  (duration: 299.5087ms)"],"step_count":1}
	
	
	==> gcp-auth [ed45968ebc07] <==
	2023/12/26 21:53:35 GCP Auth Webhook started!
	2023/12/26 21:53:47 Ready to marshal response ...
	2023/12/26 21:53:47 Ready to write response ...
	2023/12/26 21:53:53 Ready to marshal response ...
	2023/12/26 21:53:53 Ready to write response ...
	2023/12/26 21:54:00 Ready to marshal response ...
	2023/12/26 21:54:00 Ready to write response ...
	2023/12/26 21:54:00 Ready to marshal response ...
	2023/12/26 21:54:00 Ready to write response ...
	2023/12/26 21:54:27 Ready to marshal response ...
	2023/12/26 21:54:27 Ready to write response ...
	2023/12/26 21:54:32 Ready to marshal response ...
	2023/12/26 21:54:32 Ready to write response ...
	
	
	==> kernel <==
	 21:54:33 up 6 min,  0 users,  load average: 3.56, 2.67, 1.24
	Linux addons-839600 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [32e7bd1104e5] <==
	Trace[1678366715]: [799.50798ms] [799.50798ms] END
	I1226 21:53:29.817625       1 trace.go:236] Trace[356387363]: "List" accept:application/json, */*,audit-id:f9e348e7-99cd-4609-8a9e-a48c298ed126,client:172.21.176.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (26-Dec-2023 21:53:29.120) (total time: 697ms):
	Trace[356387363]: ["List(recursive=true) etcd3" audit-id:f9e348e7-99cd-4609-8a9e-a48c298ed126,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 697ms (21:53:29.120)]
	Trace[356387363]: [697.442793ms] [697.442793ms] END
	I1226 21:53:30.167319       1 trace.go:236] Trace[1029490533]: "Get" accept:application/vnd.kubernetes.protobuf;as=PartialObjectMetadata;g=meta.k8s.io;v=v1,application/json;as=PartialObjectMetadata;g=meta.k8s.io;v=v1,application/json,audit-id:c3867285-e0f4-4a5b-b33c-a7e87ca3c2a8,client:172.21.177.30,protocol:HTTP/2.0,resource:jobs,scope:resource,url:/apis/batch/v1/namespaces/gcp-auth/jobs/gcp-auth-certs-patch,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:generic-garbage-collector,verb:GET (26-Dec-2023 21:53:29.389) (total time: 777ms):
	Trace[1029490533]: ---"About to write a response" 777ms (21:53:30.167)
	Trace[1029490533]: [777.824177ms] [777.824177ms] END
	I1226 21:53:30.167525       1 trace.go:236] Trace[1578488703]: "Get" accept:application/json, */*,audit-id:436551a1-9992-440f-a89d-e40121d1bf66,client:10.244.0.17,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/external-health-monitor-leader-hostpath-csi-k8s-io,user-agent:csi-external-health-monitor-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (26-Dec-2023 21:53:29.147) (total time: 1019ms):
	Trace[1578488703]: ---"About to write a response" 1019ms (21:53:30.167)
	Trace[1578488703]: [1.019973629s] [1.019973629s] END
	I1226 21:53:30.168635       1 trace.go:236] Trace[1968889025]: "Get" accept:application/vnd.kubernetes.protobuf;as=PartialObjectMetadata;g=meta.k8s.io;v=v1,application/json;as=PartialObjectMetadata;g=meta.k8s.io;v=v1,application/json,audit-id:48d05954-aeb6-491c-87cc-01bb80b69132,client:172.21.177.30,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/gcp-auth/pods/gcp-auth-certs-patch-fs6vk,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:generic-garbage-collector,verb:GET (26-Dec-2023 21:53:29.389) (total time: 779ms):
	Trace[1968889025]: ---"About to write a response" 779ms (21:53:30.168)
	Trace[1968889025]: [779.487197ms] [779.487197ms] END
	I1226 21:53:30.174348       1 trace.go:236] Trace[247047658]: "List" accept:application/json, */*,audit-id:cce495b3-63da-4c22-acca-fe563b465e06,client:172.21.176.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (26-Dec-2023 21:53:29.383) (total time: 790ms):
	Trace[247047658]: ["List(recursive=true) etcd3" audit-id:cce495b3-63da-4c22-acca-fe563b465e06,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 790ms (21:53:29.384)]
	Trace[247047658]: [790.369833ms] [790.369833ms] END
	I1226 21:53:58.997635       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1226 21:53:59.136264       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1226 21:54:00.209507       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1226 21:54:10.769349       1 trace.go:236] Trace[86261703]: "Update" accept:application/json, */*,audit-id:37aaff88-7236-4003-902b-6b0de430ae62,client:10.244.0.21,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/ingress-nginx/leases/ingress-nginx-leader,user-agent:nginx-ingress-controller/v1.9.5 (linux/amd64) ingress-nginx/f503c4bb5fa7d857ad29e94970eb550c2bc00b7c,verb:PUT (26-Dec-2023 21:54:10.222) (total time: 546ms):
	Trace[86261703]: ["GuaranteedUpdate etcd3" audit-id:37aaff88-7236-4003-902b-6b0de430ae62,key:/leases/ingress-nginx/ingress-nginx-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 546ms (21:54:10.222)
	Trace[86261703]:  ---"Txn call completed" 545ms (21:54:10.769)]
	Trace[86261703]: [546.749634ms] [546.749634ms] END
	I1226 21:54:18.676404       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1226 21:54:19.443069       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [9031d364393e] <==
	I1226 21:53:40.652853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="43.685069ms"
	I1226 21:53:40.653583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="90.401µs"
	I1226 21:53:48.080386       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1226 21:53:52.446739       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1226 21:53:58.820690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="9.9µs"
	I1226 21:53:59.871600       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	E1226 21:54:00.211963       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I1226 21:54:00.281613       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1226 21:54:01.702431       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:54:01.702466       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1226 21:54:03.081872       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1226 21:54:04.072260       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:54:04.072297       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:54:09.059361       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:54:09.059404       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1226 21:54:09.076498       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1226 21:54:10.800811       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="9.5µs"
	W1226 21:54:18.359465       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:54:18.359501       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1226 21:54:18.519261       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1226 21:54:18.519367       1 shared_informer.go:318] Caches are synced for resource quota
	I1226 21:54:18.829615       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1226 21:54:18.830206       1 shared_informer.go:318] Caches are synced for garbage collector
	I1226 21:54:21.833959       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1226 21:54:25.990955       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [f19cbe0604c5] <==
	I1226 21:50:32.056056       1 server_others.go:69] "Using iptables proxy"
	I1226 21:50:32.355380       1 node.go:141] Successfully retrieved node IP: 172.21.177.30
	I1226 21:50:32.712282       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1226 21:50:32.723432       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1226 21:50:32.741281       1 server_others.go:152] "Using iptables Proxier"
	I1226 21:50:32.741519       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 21:50:32.743269       1 server.go:846] "Version info" version="v1.28.4"
	I1226 21:50:32.743902       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 21:50:32.794318       1 config.go:188] "Starting service config controller"
	I1226 21:50:32.794552       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 21:50:32.806161       1 config.go:97] "Starting endpoint slice config controller"
	I1226 21:50:32.806189       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 21:50:32.806494       1 config.go:315] "Starting node config controller"
	I1226 21:50:32.806515       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 21:50:32.910075       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1226 21:50:32.910180       1 shared_informer.go:318] Caches are synced for service config
	I1226 21:50:32.910755       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [adff06e2f005] <==
	W1226 21:50:03.227806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 21:50:03.227984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 21:50:03.297480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1226 21:50:03.297516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1226 21:50:03.449272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 21:50:03.449325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 21:50:03.562102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 21:50:03.562139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 21:50:03.748359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 21:50:03.748483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 21:50:03.757663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1226 21:50:03.757713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1226 21:50:03.806445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 21:50:03.806704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1226 21:50:03.812347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 21:50:03.812504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1226 21:50:03.820017       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 21:50:03.820058       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1226 21:50:03.837031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1226 21:50:03.837096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 21:50:03.859197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 21:50:03.859243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1226 21:50:03.906734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 21:50:03.906818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1226 21:50:06.700229       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-26 21:48:08 UTC, ends at Tue 2023-12-26 21:54:34 UTC. --
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.527334    2687 memory_manager.go:346] "RemoveStaleState removing state" podUID="d30f082f-0fb6-46ee-8808-b4b64e0c6459" containerName="busybox"
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.527345    2687 memory_manager.go:346] "RemoveStaleState removing state" podUID="1676afa0-b387-4097-b097-3b9dafada9ad" containerName="task-pv-container"
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.559601    2687 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2fkmh" secret="" err="secret \"gcp-auth\" not found"
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.628035    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/888759ef-9dea-4a06-ac1d-a1a36b93e280-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"888759ef-9dea-4a06-ac1d-a1a36b93e280\") " pod="default/task-pv-pod-restore"
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.628275    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a51a215a-5746-4bad-aec7-236c901a9ae6\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^56235e64-a439-11ee-8951-4e28093f03d8\") pod \"task-pv-pod-restore\" (UID: \"888759ef-9dea-4a06-ac1d-a1a36b93e280\") " pod="default/task-pv-pod-restore"
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.628348    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h2t\" (UniqueName: \"kubernetes.io/projected/888759ef-9dea-4a06-ac1d-a1a36b93e280-kube-api-access-46h2t\") pod \"task-pv-pod-restore\" (UID: \"888759ef-9dea-4a06-ac1d-a1a36b93e280\") " pod="default/task-pv-pod-restore"
	Dec 26 21:54:27 addons-839600 kubelet[2687]: I1226 21:54:27.749011    2687 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-a51a215a-5746-4bad-aec7-236c901a9ae6\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^56235e64-a439-11ee-8951-4e28093f03d8\") pod \"task-pv-pod-restore\" (UID: \"888759ef-9dea-4a06-ac1d-a1a36b93e280\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/e601ce8a84e2a44b9c197c0237965dec51f335891c0f206374c276ff4bc14069/globalmount\"" pod="default/task-pv-pod-restore"
	Dec 26 21:54:28 addons-839600 kubelet[2687]: I1226 21:54:28.838420    2687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e88e3628fc91af30a54772318f9b7b5a23ec5bf3297a65373601588381e1afc9"
	Dec 26 21:54:29 addons-839600 kubelet[2687]: I1226 21:54:29.943242    2687 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.399126136 podCreationTimestamp="2023-12-26 21:54:27 +0000 UTC" firstStartedPulling="2023-12-26 21:54:28.914926614 +0000 UTC m=+262.722340508" lastFinishedPulling="2023-12-26 21:54:29.458995867 +0000 UTC m=+263.266409861" observedRunningTime="2023-12-26 21:54:29.942115683 +0000 UTC m=+263.749529677" watchObservedRunningTime="2023-12-26 21:54:29.943195489 +0000 UTC m=+263.750609383"
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.040914    2687 scope.go:117] "RemoveContainer" containerID="68c54007e882ee2174139b8f4d3fb28d9697c540e999e24d73acd335f4338102"
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.162058    2687 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/cc4602ed-0428-4409-a78b-30d70e22826f-device-plugin\") pod \"cc4602ed-0428-4409-a78b-30d70e22826f\" (UID: \"cc4602ed-0428-4409-a78b-30d70e22826f\") "
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.162222    2687 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8p8b\" (UniqueName: \"kubernetes.io/projected/cc4602ed-0428-4409-a78b-30d70e22826f-kube-api-access-s8p8b\") pod \"cc4602ed-0428-4409-a78b-30d70e22826f\" (UID: \"cc4602ed-0428-4409-a78b-30d70e22826f\") "
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.162605    2687 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc4602ed-0428-4409-a78b-30d70e22826f-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "cc4602ed-0428-4409-a78b-30d70e22826f" (UID: "cc4602ed-0428-4409-a78b-30d70e22826f"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.174020    2687 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4602ed-0428-4409-a78b-30d70e22826f-kube-api-access-s8p8b" (OuterVolumeSpecName: "kube-api-access-s8p8b") pod "cc4602ed-0428-4409-a78b-30d70e22826f" (UID: "cc4602ed-0428-4409-a78b-30d70e22826f"). InnerVolumeSpecName "kube-api-access-s8p8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.263023    2687 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s8p8b\" (UniqueName: \"kubernetes.io/projected/cc4602ed-0428-4409-a78b-30d70e22826f-kube-api-access-s8p8b\") on node \"addons-839600\" DevicePath \"\""
	Dec 26 21:54:31 addons-839600 kubelet[2687]: I1226 21:54:31.263823    2687 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/cc4602ed-0428-4409-a78b-30d70e22826f-device-plugin\") on node \"addons-839600\" DevicePath \"\""
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.582770    2687 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cc4602ed-0428-4409-a78b-30d70e22826f" path="/var/lib/kubelet/pods/cc4602ed-0428-4409-a78b-30d70e22826f/volumes"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.583612    2687 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d30f082f-0fb6-46ee-8808-b4b64e0c6459" path="/var/lib/kubelet/pods/d30f082f-0fb6-46ee-8808-b4b64e0c6459/volumes"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.736222    2687 topology_manager.go:215] "Topology Admit Handler" podUID="7ffe77b8-101f-4bec-b9dd-b06461de1991" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: E1226 21:54:32.736312    2687 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc4602ed-0428-4409-a78b-30d70e22826f" containerName="nvidia-device-plugin-ctr"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.736360    2687 memory_manager.go:346] "RemoveStaleState removing state" podUID="cc4602ed-0428-4409-a78b-30d70e22826f" containerName="nvidia-device-plugin-ctr"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.876668    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7ffe77b8-101f-4bec-b9dd-b06461de1991-script\") pod \"helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae\" (UID: \"7ffe77b8-101f-4bec-b9dd-b06461de1991\") " pod="local-path-storage/helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.877079    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shx6n\" (UniqueName: \"kubernetes.io/projected/7ffe77b8-101f-4bec-b9dd-b06461de1991-kube-api-access-shx6n\") pod \"helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae\" (UID: \"7ffe77b8-101f-4bec-b9dd-b06461de1991\") " pod="local-path-storage/helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.877266    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7ffe77b8-101f-4bec-b9dd-b06461de1991-data\") pod \"helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae\" (UID: \"7ffe77b8-101f-4bec-b9dd-b06461de1991\") " pod="local-path-storage/helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae"
	Dec 26 21:54:32 addons-839600 kubelet[2687]: I1226 21:54:32.877405    2687 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7ffe77b8-101f-4bec-b9dd-b06461de1991-gcp-creds\") pod \"helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae\" (UID: \"7ffe77b8-101f-4bec-b9dd-b06461de1991\") " pod="local-path-storage/helper-pod-delete-pvc-861b55a7-d7ac-4486-8979-8c51e4270cae"
	
	
	==> storage-provisioner [627e39e69e6e] <==
	I1226 21:50:53.350951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 21:50:53.381531       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 21:50:53.384949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 21:50:53.420983       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 21:50:53.426045       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d009f049-6f42-444d-8ac8-27181dde2820", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-839600_ea3b487b-757e-4744-9851-e5391aa1b192 became leader
	I1226 21:50:53.426107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-839600_ea3b487b-757e-4744-9851-e5391aa1b192!
	I1226 21:50:53.526556       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-839600_ea3b487b-757e-4744-9851-e5391aa1b192!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 21:54:24.419276    7272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-839600 -n addons-839600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-839600 -n addons-839600: (14.1315072s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-839600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-txmg5 ingress-nginx-admission-patch-v2jgj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-839600 describe pod ingress-nginx-admission-create-txmg5 ingress-nginx-admission-patch-v2jgj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-839600 describe pod ingress-nginx-admission-create-txmg5 ingress-nginx-admission-patch-v2jgj: exit status 1 (242.1081ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-txmg5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v2jgj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-839600 describe pod ingress-nginx-admission-create-txmg5 ingress-nginx-admission-patch-v2jgj: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.00s)

                                                
                                    
TestErrorSpam/setup (194.52s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-211800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 --driver=hyperv
E1226 21:58:36.127654   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.142689   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.157773   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.189620   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.237669   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.332680   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.506593   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:36.838491   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:37.492678   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:38.781966   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:41.348942   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:46.485314   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:58:56.733795   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:59:17.216270   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 21:59:58.179559   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:01:20.111489   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-211800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 --driver=hyperv: (3m14.5208215s)
error_spam_test.go:96: unexpected stderr: "W1226 21:58:11.823863   11916 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-211800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=17857
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-211800 in cluster nospam-211800
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-211800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W1226 21:58:11.823863   11916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (194.52s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-796600 config unset cpus" to be -""- but got *"W1226 22:13:59.200429    1136 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 config get cpus: exit status 14 (356.0215ms)

                                                
                                                
** stderr ** 
	W1226 22:13:59.650185    9228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-796600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1226 22:13:59.650185    9228 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-796600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W1226 22:14:00.125638    3728 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-796600 config get cpus" to be -""- but got *"W1226 22:14:00.522247    2576 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-796600 config unset cpus" to be -""- but got *"W1226 22:14:00.913803    8540 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 config get cpus: exit status 14 (328.624ms)

                                                
                                                
** stderr ** 
	W1226 22:14:01.317108    2344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-796600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1226 22:14:01.317108    2344 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 service --namespace=default --https --url hello-node: exit status 1 (15.0520395s)

                                                
                                                
** stderr ** 
	W1226 22:15:17.201149    7540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-796600 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 service hello-node --url --format={{.IP}}: exit status 1 (15.0489971s)

                                                
                                                
** stderr ** 
	W1226 22:15:32.246012   13544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-796600 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.05s)

TestFunctional/parallel/ServiceCmd/URL (15.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 service hello-node --url: exit status 1 (15.0604788s)

** stderr ** 
	W1226 22:15:47.277951    1936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-796600 service hello-node --url": exit status 1
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.06s)

TestMultiNode/serial/PingHostFrom2Pods (57.56s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- sh -c "ping -c 1 172.21.176.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- sh -c "ping -c 1 172.21.176.1": exit status 1 (10.5152334s)

-- stdout --
	PING 172.21.176.1 (172.21.176.1): 56 data bytes
	
	--- 172.21.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W1226 23:02:31.493440    2912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.21.176.1) from pod (busybox-5bc68d56bd-bskhd): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-flvvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-flvvn -- sh -c "ping -c 1 172.21.176.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-flvvn -- sh -c "ping -c 1 172.21.176.1": exit status 1 (10.5488857s)

-- stdout --
	PING 172.21.176.1 (172.21.176.1): 56 data bytes
	
	--- 172.21.176.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W1226 23:02:42.562625    5608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.21.176.1) from pod (busybox-5bc68d56bd-flvvn): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-455300 -n multinode-455300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-455300 -n multinode-455300: (12.2439696s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 logs -n 25: (8.6673073s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-421200 ssh -- ls                    | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:51 UTC | 26 Dec 23 22:51 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-421200                           | mount-start-1-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:51 UTC | 26 Dec 23 22:52 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-421200 ssh -- ls                    | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:52 UTC | 26 Dec 23 22:52 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-421200                           | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:52 UTC | 26 Dec 23 22:52 UTC |
	| start   | -p mount-start-2-421200                           | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:52 UTC | 26 Dec 23 22:54 UTC |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:54 UTC |                     |
	|         | --profile mount-start-2-421200 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-421200 ssh -- ls                    | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:54 UTC | 26 Dec 23 22:54 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-421200                           | mount-start-2-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:54 UTC | 26 Dec 23 22:55 UTC |
	| delete  | -p mount-start-1-421200                           | mount-start-1-421200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:55 UTC | 26 Dec 23 22:55 UTC |
	| start   | -p multinode-455300                               | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 22:55 UTC | 26 Dec 23 23:01 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- apply -f                   | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- rollout                    | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- get pods -o                | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- get pods -o                | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-bskhd --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-flvvn --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-bskhd --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-flvvn --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-bskhd -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-flvvn -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- get pods -o                | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-bskhd                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC |                     |
	|         | busybox-5bc68d56bd-bskhd -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.21.176.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC | 26 Dec 23 23:02 UTC |
	|         | busybox-5bc68d56bd-flvvn                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-455300 -- exec                       | multinode-455300     | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:02 UTC |                     |
	|         | busybox-5bc68d56bd-flvvn -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.21.176.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 22:55:14
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 22:55:14.362797    4740 out.go:296] Setting OutFile to fd 1164 ...
	I1226 22:55:14.362797    4740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:55:14.362797    4740 out.go:309] Setting ErrFile to fd 1252...
	I1226 22:55:14.362797    4740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:55:14.386884    4740 out.go:303] Setting JSON to false
	I1226 22:55:14.390866    4740 start.go:128] hostinfo: {"hostname":"minikube1","uptime":5713,"bootTime":1703625601,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 22:55:14.391534    4740 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 22:55:14.397041    4740 out.go:177] * [multinode-455300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 22:55:14.402014    4740 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:55:14.401881    4740 notify.go:220] Checking for updates...
	I1226 22:55:14.405145    4740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:55:14.407611    4740 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 22:55:14.411240    4740 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:55:14.413719    4740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:55:14.417290    4740 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:55:20.004146    4740 out.go:177] * Using the hyperv driver based on user configuration
	I1226 22:55:20.007074    4740 start.go:298] selected driver: hyperv
	I1226 22:55:20.007074    4740 start.go:902] validating driver "hyperv" against <nil>
	I1226 22:55:20.007074    4740 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:55:20.058940    4740 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 22:55:20.061959    4740 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 22:55:20.062166    4740 cni.go:84] Creating CNI manager for ""
	I1226 22:55:20.062208    4740 cni.go:136] 0 nodes found, recommending kindnet
	I1226 22:55:20.062208    4740 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 22:55:20.062261    4740 start_flags.go:323] config:
	{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:55:20.062261    4740 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:55:20.066447    4740 out.go:177] * Starting control plane node multinode-455300 in cluster multinode-455300
	I1226 22:55:20.070111    4740 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 22:55:20.070258    4740 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 22:55:20.070258    4740 cache.go:56] Caching tarball of preloaded images
	I1226 22:55:20.070258    4740 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 22:55:20.071236    4740 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 22:55:20.071709    4740 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 22:55:20.071709    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json: {Name:mkbd1e2f9913ac638dbb93b0001354a656b2552f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:55:20.072915    4740 start.go:365] acquiring machines lock for multinode-455300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 22:55:20.072915    4740 start.go:369] acquired machines lock for "multinode-455300" in 0s
	I1226 22:55:20.072915    4740 start.go:93] Provisioning new machine with config: &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 22:55:20.074348    4740 start.go:125] createHost starting for "" (driver="hyperv")
	I1226 22:55:20.078152    4740 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1226 22:55:20.078582    4740 start.go:159] libmachine.API.Create for "multinode-455300" (driver="hyperv")
	I1226 22:55:20.078673    4740 client.go:168] LocalClient.Create starting
	I1226 22:55:20.079455    4740 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1226 22:55:20.079793    4740 main.go:141] libmachine: Decoding PEM data...
	I1226 22:55:20.079888    4740 main.go:141] libmachine: Parsing certificate...
	I1226 22:55:20.080161    4740 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1226 22:55:20.080329    4740 main.go:141] libmachine: Decoding PEM data...
	I1226 22:55:20.080329    4740 main.go:141] libmachine: Parsing certificate...
	I1226 22:55:20.080329    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1226 22:55:22.276278    4740 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1226 22:55:22.276494    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:22.276579    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1226 22:55:24.123957    4740 main.go:141] libmachine: [stdout =====>] : False
	
	I1226 22:55:24.123957    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:24.124250    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1226 22:55:25.685465    4740 main.go:141] libmachine: [stdout =====>] : True
	
	I1226 22:55:25.685557    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:25.685630    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1226 22:55:29.328633    4740 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1226 22:55:29.328686    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:29.331670    4740 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1226 22:55:29.843864    4740 main.go:141] libmachine: Creating SSH key...
	I1226 22:55:30.227409    4740 main.go:141] libmachine: Creating VM...
	I1226 22:55:30.228412    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1226 22:55:33.090865    4740 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1226 22:55:33.091202    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:33.091308    4740 main.go:141] libmachine: Using switch "Default Switch"
	I1226 22:55:33.091395    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1226 22:55:34.889529    4740 main.go:141] libmachine: [stdout =====>] : True
	
	I1226 22:55:34.889529    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:34.889740    4740 main.go:141] libmachine: Creating VHD
	I1226 22:55:34.889740    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\fixed.vhd' -SizeBytes 10MB -Fixed
	I1226 22:55:38.531542    4740 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B20B4F83-1275-4BB9-B915-7AA8605EB0FE
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1226 22:55:38.538110    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:38.538110    4740 main.go:141] libmachine: Writing magic tar header
	I1226 22:55:38.538110    4740 main.go:141] libmachine: Writing SSH key tar header
	I1226 22:55:38.546846    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\disk.vhd' -VHDType Dynamic -DeleteSource
	I1226 22:55:41.687766    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:55:41.688021    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:41.688021    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\disk.vhd' -SizeBytes 20000MB
	I1226 22:55:44.269954    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:55:44.269954    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:44.270036    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-455300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1226 22:55:47.973677    4740 main.go:141] libmachine: [stdout =====>] : 
Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-455300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1226 22:55:47.973883    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:47.973883    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-455300 -DynamicMemoryEnabled $false
	I1226 22:55:50.219642    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:55:50.220051    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:50.220051    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-455300 -Count 2
	I1226 22:55:52.398973    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:55:52.399148    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:52.399148    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-455300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\boot2docker.iso'
	I1226 22:55:54.969822    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:55:54.969822    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:54.969822    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-455300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\disk.vhd'
	I1226 22:55:57.655990    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:55:57.655990    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:55:57.656189    4740 main.go:141] libmachine: Starting VM...
	I1226 22:55:57.656189    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300
	I1226 22:56:00.730550    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:56:00.730550    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:00.730550    4740 main.go:141] libmachine: Waiting for host to start...
	I1226 22:56:00.730674    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:03.018266    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:03.018266    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:03.018382    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:05.557350    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:56:05.557388    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:06.559936    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:08.793419    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:08.793524    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:08.793524    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:11.349302    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:56:11.349542    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:12.363800    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:14.604255    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:14.604255    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:14.604255    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:17.109266    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:56:17.109266    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:18.112520    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:20.363704    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:20.363916    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:20.363977    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:22.977070    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:56:22.977351    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:23.993175    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:26.298411    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:26.298620    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:26.298662    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:28.956809    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:28.956970    4740 main.go:141] libmachine: [stderr =====>] : 
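The repeated `( Hyper-V\Get-VM … ).state` / `ipaddresses[0]` pairs above are a poll loop: the driver keeps querying the first network adapter until DHCP hands the guest an address. A minimal sketch of that retry-until-nonempty pattern, where `get_ip` is a hypothetical stand-in for the Hyper-V query (here it simply returns nothing for the first three probes):

```shell
# Poll until the "VM" reports an IP address, like the libmachine loop above.
attempt=0
get_ip() {
  # Hypothetical stand-in for the ipaddresses[0] query: empty until probe 4.
  if [ "$attempt" -ge 3 ]; then echo 192.0.2.10; fi
}
ip=""
while [ -z "$ip" ]; do
  ip=$(get_ip)                 # empty output means "guest not ready yet"
  attempt=$((attempt + 1))     # the real loop also sleeps between probes
done
echo "got $ip after $attempt probes"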
	I1226 22:56:28.956970    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:31.108140    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:31.108140    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:31.108140    4740 machine.go:88] provisioning docker machine ...
	I1226 22:56:31.108140    4740 buildroot.go:166] provisioning hostname "multinode-455300"
	I1226 22:56:31.108140    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:33.319769    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:33.320169    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:33.320260    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:35.883422    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:35.883422    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:35.889724    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:56:35.899839    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:56:35.899839    4740 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300 && echo "multinode-455300" | sudo tee /etc/hostname
	I1226 22:56:36.051359    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300
	
	I1226 22:56:36.051503    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:38.179111    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:38.179367    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:38.179513    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:40.734647    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:40.734647    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:40.741025    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:56:40.741816    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:56:40.741816    4740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:56:40.878346    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:56:40.878346    4740 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 22:56:40.878346    4740 buildroot.go:174] setting up certificates
	I1226 22:56:40.878346    4740 provision.go:83] configureAuth start
	I1226 22:56:40.878958    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:43.001118    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:43.001303    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:43.001367    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:45.548706    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:45.548915    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:45.549017    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:47.663417    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:47.663745    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:47.663745    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:50.178973    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:50.178973    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:50.179049    4740 provision.go:138] copyHostCerts
	I1226 22:56:50.179226    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 22:56:50.179655    4740 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 22:56:50.179655    4740 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 22:56:50.180239    4740 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 22:56:50.181676    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 22:56:50.181892    4740 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 22:56:50.181977    4740 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 22:56:50.182412    4740 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 22:56:50.183744    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 22:56:50.184043    4740 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 22:56:50.184043    4740 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 22:56:50.184609    4740 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 22:56:50.185528    4740 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300 san=[172.21.184.4 172.21.184.4 localhost 127.0.0.1 minikube multinode-455300]
	I1226 22:56:50.317578    4740 provision.go:172] copyRemoteCerts
	I1226 22:56:50.333728    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:56:50.333728    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:52.440007    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:52.440007    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:52.440387    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:54.983286    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:54.983484    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:54.983681    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 22:56:55.093126    4740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7593424s)
	I1226 22:56:55.093256    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 22:56:55.093973    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 22:56:55.137725    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 22:56:55.138259    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:56:55.179733    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 22:56:55.180279    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1226 22:56:55.221315    4740 provision.go:86] duration metric: configureAuth took 14.342858s
	I1226 22:56:55.221413    4740 buildroot.go:189] setting minikube options for container-runtime
	I1226 22:56:55.221699    4740 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 22:56:55.221699    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:56:57.338362    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:56:57.338362    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:57.338362    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:56:59.837299    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:56:59.837544    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:56:59.843127    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:56:59.843932    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:56:59.843932    4740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 22:56:59.968768    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 22:56:59.968768    4740 buildroot.go:70] root file system type: tmpfs
	I1226 22:56:59.969765    4740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 22:56:59.969859    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:02.110026    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:02.110217    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:02.110217    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:04.595970    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:04.595970    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:04.602108    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:57:04.602907    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:57:04.602907    4740 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 22:57:04.748258    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 22:57:04.748258    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:06.881885    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:06.881885    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:06.881979    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:09.446280    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:09.446345    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:09.451222    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:57:09.452166    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:57:09.452166    4740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 22:57:10.568107    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 22:57:10.568175    4740 machine.go:91] provisioned docker machine in 39.4600398s
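The `diff … || { mv …; systemctl … }` command above is an install-if-changed idiom: the new unit file is written to a `.new` path, and only moved into place (triggering a daemon-reload and restart) when it differs from what is already installed. A sketch of the same idiom using throwaway file names instead of the real systemd paths:

```shell
# Install-if-changed: replace the target only when the new file differs.
dir=$(mktemp -d)
printf '%s\n' '[Unit]' 'Description=Example' > "$dir/docker.service.new"
# First run: the target does not exist, so diff fails and we install.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  installed=yes   # here minikube would daemon-reload && restart docker
else
  installed=no    # unchanged unit: skip the disruptive restart
fi
```

In the log this is exactly why `diff: can't stat '/lib/systemd/system/docker.service'` appears on a fresh VM: the failure branch is the expected first-boot path.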
	I1226 22:57:10.568175    4740 client.go:171] LocalClient.Create took 1m50.4895193s
	I1226 22:57:10.568238    4740 start.go:167] duration metric: libmachine.API.Create for "multinode-455300" took 1m50.4896727s
	I1226 22:57:10.568238    4740 start.go:300] post-start starting for "multinode-455300" (driver="hyperv")
	I1226 22:57:10.568238    4740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:57:10.581333    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:57:10.581333    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:12.706311    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:12.706311    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:12.706311    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:15.291590    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:15.291590    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:15.291983    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 22:57:15.400428    4740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8190247s)
	I1226 22:57:15.413742    4740 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:57:15.421153    4740 command_runner.go:130] > NAME=Buildroot
	I1226 22:57:15.421153    4740 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 22:57:15.421153    4740 command_runner.go:130] > ID=buildroot
	I1226 22:57:15.421153    4740 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 22:57:15.421153    4740 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 22:57:15.421290    4740 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 22:57:15.421390    4740 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 22:57:15.422742    4740 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 22:57:15.423856    4740 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 22:57:15.423856    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 22:57:15.435617    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:57:15.457057    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 22:57:15.497069    4740 start.go:303] post-start completed in 4.9288313s
	I1226 22:57:15.500220    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:17.661775    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:17.661775    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:17.661905    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:20.194443    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:20.194729    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:20.194985    4740 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 22:57:20.197991    4740 start.go:128] duration metric: createHost completed in 2m0.1236608s
	I1226 22:57:20.198085    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:22.377913    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:22.378076    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:22.378076    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:24.924120    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:24.924419    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:24.931390    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:57:24.932094    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:57:24.932094    4740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 22:57:25.058055    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703631445.050403501
	
	I1226 22:57:25.058269    4740 fix.go:206] guest clock: 1703631445.050403501
	I1226 22:57:25.058388    4740 fix.go:219] Guest: 2023-12-26 22:57:25.050403501 +0000 UTC Remote: 2023-12-26 22:57:20.1980856 +0000 UTC m=+126.008137501 (delta=4.852317901s)
	I1226 22:57:25.058463    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:27.214508    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:27.214577    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:27.214577    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:29.750706    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:29.750958    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:29.756631    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 22:57:29.757345    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.4 22 <nil> <nil>}
	I1226 22:57:29.757345    4740 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703631445
	I1226 22:57:29.907049    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 22:57:25 UTC 2023
	
	I1226 22:57:29.907049    4740 fix.go:226] clock set: Tue Dec 26 22:57:25 UTC 2023
	 (err=<nil>)
	I1226 22:57:29.907049    4740 start.go:83] releasing machines lock for "multinode-455300", held for 2m9.8341517s
	I1226 22:57:29.907671    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:32.055247    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:32.055247    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:32.055247    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:34.589011    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:34.589011    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:34.593803    4740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:57:34.593902    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:34.605606    4740 ssh_runner.go:195] Run: cat /version.json
	I1226 22:57:34.605606    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:57:36.767437    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:36.767437    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:36.767437    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:57:36.767437    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:36.767437    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:36.767750    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:57:39.432092    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:39.432595    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:39.432685    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 22:57:39.450361    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:57:39.450361    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:57:39.450906    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 22:57:39.611405    4740 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 22:57:39.611405    4740 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0176029s)
	I1226 22:57:39.611405    4740 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I1226 22:57:39.611405    4740 ssh_runner.go:235] Completed: cat /version.json: (5.0057996s)
	I1226 22:57:39.625852    4740 ssh_runner.go:195] Run: systemctl --version
	I1226 22:57:39.635474    4740 command_runner.go:130] > systemd 247 (247)
	I1226 22:57:39.635474    4740 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1226 22:57:39.648547    4740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:57:39.657459    4740 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1226 22:57:39.657679    4740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 22:57:39.670035    4740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:57:39.692435    4740 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1226 22:57:39.692905    4740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 22:57:39.692905    4740 start.go:475] detecting cgroup driver to use...
	I1226 22:57:39.693252    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:57:39.721971    4740 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1226 22:57:39.736264    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 22:57:39.770911    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 22:57:39.787524    4740 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 22:57:39.800231    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 22:57:39.827639    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 22:57:39.858422    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 22:57:39.887827    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 22:57:39.918194    4740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:57:39.946938    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1226 22:57:39.978417    4740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:57:39.992979    4740 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 22:57:40.008833    4740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:57:40.039699    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:57:40.210003    4740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 22:57:40.237831    4740 start.go:475] detecting cgroup driver to use...
	I1226 22:57:40.251625    4740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 22:57:40.274773    4740 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1226 22:57:40.274773    4740 command_runner.go:130] > [Unit]
	I1226 22:57:40.274773    4740 command_runner.go:130] > Description=Docker Application Container Engine
	I1226 22:57:40.274773    4740 command_runner.go:130] > Documentation=https://docs.docker.com
	I1226 22:57:40.274773    4740 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1226 22:57:40.274773    4740 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1226 22:57:40.274773    4740 command_runner.go:130] > StartLimitBurst=3
	I1226 22:57:40.274896    4740 command_runner.go:130] > StartLimitIntervalSec=60
	I1226 22:57:40.274896    4740 command_runner.go:130] > [Service]
	I1226 22:57:40.274896    4740 command_runner.go:130] > Type=notify
	I1226 22:57:40.274896    4740 command_runner.go:130] > Restart=on-failure
	I1226 22:57:40.274896    4740 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1226 22:57:40.274896    4740 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1226 22:57:40.274896    4740 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1226 22:57:40.275002    4740 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1226 22:57:40.275002    4740 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1226 22:57:40.275002    4740 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1226 22:57:40.275002    4740 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1226 22:57:40.275074    4740 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1226 22:57:40.275124    4740 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1226 22:57:40.275124    4740 command_runner.go:130] > ExecStart=
	I1226 22:57:40.275124    4740 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1226 22:57:40.275124    4740 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1226 22:57:40.275124    4740 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1226 22:57:40.275216    4740 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1226 22:57:40.275216    4740 command_runner.go:130] > LimitNOFILE=infinity
	I1226 22:57:40.275216    4740 command_runner.go:130] > LimitNPROC=infinity
	I1226 22:57:40.275216    4740 command_runner.go:130] > LimitCORE=infinity
	I1226 22:57:40.275216    4740 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1226 22:57:40.275216    4740 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1226 22:57:40.275216    4740 command_runner.go:130] > TasksMax=infinity
	I1226 22:57:40.275216    4740 command_runner.go:130] > TimeoutStartSec=0
	I1226 22:57:40.275306    4740 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1226 22:57:40.275306    4740 command_runner.go:130] > Delegate=yes
	I1226 22:57:40.275306    4740 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1226 22:57:40.275306    4740 command_runner.go:130] > KillMode=process
	I1226 22:57:40.275306    4740 command_runner.go:130] > [Install]
	I1226 22:57:40.275306    4740 command_runner.go:130] > WantedBy=multi-user.target
	I1226 22:57:40.289350    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:57:40.320157    4740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:57:40.353715    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:57:40.385744    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 22:57:40.421976    4740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 22:57:40.494079    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 22:57:40.516847    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:57:40.543836    4740 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1226 22:57:40.559404    4740 ssh_runner.go:195] Run: which cri-dockerd
	I1226 22:57:40.564605    4740 command_runner.go:130] > /usr/bin/cri-dockerd
	I1226 22:57:40.577588    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 22:57:40.593020    4740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 22:57:40.633606    4740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 22:57:40.800752    4740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 22:57:40.955279    4740 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 22:57:40.955521    4740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 22:57:40.996540    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:57:41.170815    4740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 22:57:42.748185    4740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5773699s)
	I1226 22:57:42.761856    4740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 22:57:42.941603    4740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 22:57:43.116302    4740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 22:57:43.281922    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:57:43.452412    4740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 22:57:43.492690    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:57:43.658785    4740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 22:57:43.761432    4740 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 22:57:43.775142    4740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 22:57:43.784818    4740 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1226 22:57:43.784818    4740 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 22:57:43.784818    4740 command_runner.go:130] > Device: 16h/22d	Inode: 921         Links: 1
	I1226 22:57:43.784932    4740 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1226 22:57:43.784932    4740 command_runner.go:130] > Access: 2023-12-26 22:57:43.675122508 +0000
	I1226 22:57:43.784971    4740 command_runner.go:130] > Modify: 2023-12-26 22:57:43.675122508 +0000
	I1226 22:57:43.784971    4740 command_runner.go:130] > Change: 2023-12-26 22:57:43.680122508 +0000
	I1226 22:57:43.784971    4740 command_runner.go:130] >  Birth: -
	I1226 22:57:43.785118    4740 start.go:543] Will wait 60s for crictl version
	I1226 22:57:43.799519    4740 ssh_runner.go:195] Run: which crictl
	I1226 22:57:43.804097    4740 command_runner.go:130] > /usr/bin/crictl
	I1226 22:57:43.818102    4740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:57:43.888250    4740 command_runner.go:130] > Version:  0.1.0
	I1226 22:57:43.889276    4740 command_runner.go:130] > RuntimeName:  docker
	I1226 22:57:43.889312    4740 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1226 22:57:43.889312    4740 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 22:57:43.889400    4740 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 22:57:43.900076    4740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 22:57:43.933104    4740 command_runner.go:130] > 24.0.7
	I1226 22:57:43.944105    4740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 22:57:43.973898    4740 command_runner.go:130] > 24.0.7
	I1226 22:57:43.978885    4740 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 22:57:43.979901    4740 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 22:57:43.983887    4740 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 22:57:43.983887    4740 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 22:57:43.983887    4740 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 22:57:43.983887    4740 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 22:57:43.986883    4740 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 22:57:43.986883    4740 ip.go:210] interface addr: 172.21.176.1/20
	I1226 22:57:43.998884    4740 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 22:57:44.004610    4740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:57:44.023344    4740 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 22:57:44.033629    4740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 22:57:44.056595    4740 docker.go:671] Got preloaded images: 
	I1226 22:57:44.056595    4740 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1226 22:57:44.073215    4740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1226 22:57:44.089314    4740 command_runner.go:139] > {"Repositories":{}}
	I1226 22:57:44.100921    4740 ssh_runner.go:195] Run: which lz4
	I1226 22:57:44.106879    4740 command_runner.go:130] > /usr/bin/lz4
	I1226 22:57:44.106879    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1226 22:57:44.118830    4740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1226 22:57:44.123469    4740 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 22:57:44.124515    4740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 22:57:44.124697    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1226 22:57:46.958631    4740 docker.go:635] Took 2.851752 seconds to copy over tarball
	I1226 22:57:46.971711    4740 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1226 22:57:56.060049    4740 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.0874958s)
	I1226 22:57:56.060049    4740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1226 22:57:56.137999    4740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1226 22:57:56.155084    4740 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1226 22:57:56.155332    4740 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1226 22:57:56.199458    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:57:56.375017    4740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 22:57:59.161254    4740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7856591s)
	I1226 22:57:59.171527    4740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 22:57:59.200531    4740 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1226 22:57:59.200629    4740 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1226 22:57:59.200697    4740 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1226 22:57:59.200697    4740 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1226 22:57:59.200697    4740 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1226 22:57:59.200697    4740 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1226 22:57:59.200697    4740 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1226 22:57:59.200697    4740 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:57:59.200825    4740 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1226 22:57:59.200884    4740 cache_images.go:84] Images are preloaded, skipping loading
	I1226 22:57:59.211351    4740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 22:57:59.249902    4740 command_runner.go:130] > cgroupfs
	I1226 22:57:59.250713    4740 cni.go:84] Creating CNI manager for ""
	I1226 22:57:59.250849    4740 cni.go:136] 1 nodes found, recommending kindnet
	I1226 22:57:59.250849    4740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:57:59.251010    4740 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.184.4 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-455300 NodeName:multinode-455300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.184.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.184.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 22:57:59.251223    4740 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.184.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-455300"
	  kubeletExtraArgs:
	    node-ip: 172.21.184.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.184.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 22:57:59.251484    4740 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-455300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.184.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 22:57:59.263338    4740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 22:57:59.278911    4740 command_runner.go:130] > kubeadm
	I1226 22:57:59.278911    4740 command_runner.go:130] > kubectl
	I1226 22:57:59.279066    4740 command_runner.go:130] > kubelet
	I1226 22:57:59.279066    4740 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:57:59.292955    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 22:57:59.307899    4740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1226 22:57:59.333916    4740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 22:57:59.358639    4740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1226 22:57:59.400533    4740 ssh_runner.go:195] Run: grep 172.21.184.4	control-plane.minikube.internal$ /etc/hosts
	I1226 22:57:59.406607    4740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.184.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:57:59.424339    4740 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300 for IP: 172.21.184.4
	I1226 22:57:59.424339    4740 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.426389    4740 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 22:57:59.426815    4740 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 22:57:59.427491    4740 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.key
	I1226 22:57:59.427491    4740 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.crt with IP's: []
	I1226 22:57:59.621493    4740 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.crt ...
	I1226 22:57:59.621493    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.crt: {Name:mk6656ca1c9cd1023738825d54d417757f0968fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.622495    4740 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.key ...
	I1226 22:57:59.622495    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.key: {Name:mk83847ee29f127a27555d9725a2257629091b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.624527    4740 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.426f0287
	I1226 22:57:59.624795    4740 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.426f0287 with IP's: [172.21.184.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 22:57:59.802546    4740 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.426f0287 ...
	I1226 22:57:59.802546    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.426f0287: {Name:mk6ab5d0e76d09e7641577a3904f5285617b0486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.803967    4740 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.426f0287 ...
	I1226 22:57:59.803967    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.426f0287: {Name:mk29702946da12c3dd1055cb59049f2d73803f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.804447    4740 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.426f0287 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt
	I1226 22:57:59.816633    4740 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.426f0287 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key
	I1226 22:57:59.817535    4740 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key
	I1226 22:57:59.817535    4740 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt with IP's: []
	I1226 22:57:59.918680    4740 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt ...
	I1226 22:57:59.918680    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt: {Name:mkd78f9b3120d91c0d6e7f74e3363a6527ae8277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.920707    4740 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key ...
	I1226 22:57:59.920707    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key: {Name:mkff0c6fbeb9aed2eda96530e475d775e154d39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:57:59.921329    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 22:57:59.922471    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 22:57:59.922681    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 22:57:59.930960    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 22:57:59.932061    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:57:59.932061    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:57:59.932061    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:57:59.932061    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:57:59.932724    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1226 22:57:59.933365    4740 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1226 22:57:59.933365    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 22:57:59.933365    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 22:57:59.933365    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 22:57:59.934303    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 22:57:59.934303    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1226 22:57:59.934303    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:57:59.935295    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem -> /usr/share/ca-certificates/10728.pem
	I1226 22:57:59.935295    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /usr/share/ca-certificates/107282.pem
	I1226 22:57:59.937343    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 22:57:59.982135    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1226 22:58:00.023950    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 22:58:00.066982    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 22:58:00.108974    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:58:00.146898    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:58:00.190322    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:58:00.228981    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 22:58:00.264927    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:58:00.304174    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1226 22:58:00.345175    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1226 22:58:00.382193    4740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 22:58:00.424193    4740 ssh_runner.go:195] Run: openssl version
	I1226 22:58:00.432306    4740 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1226 22:58:00.445490    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1226 22:58:00.475691    4740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1226 22:58:00.482858    4740 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 22:58:00.482994    4740 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 22:58:00.497225    4740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1226 22:58:00.504884    4740 command_runner.go:130] > 3ec20f2e
	I1226 22:58:00.522453    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:58:00.554364    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:58:00.583637    4740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:58:00.589992    4740 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:58:00.590147    4740 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:58:00.602950    4740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:58:00.610769    4740 command_runner.go:130] > b5213941
	I1226 22:58:00.626310    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 22:58:00.658374    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1226 22:58:00.687344    4740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1226 22:58:00.694306    4740 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 22:58:00.694306    4740 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 22:58:00.705342    4740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1226 22:58:00.712316    4740 command_runner.go:130] > 51391683
	I1226 22:58:00.722732    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1226 22:58:00.751866    4740 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:58:00.756932    4740 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:58:00.758018    4740 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:58:00.758512    4740 kubeadm.go:404] StartCluster: {Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:58:00.768654    4740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 22:58:00.808762    4740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 22:58:00.823975    4740 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1226 22:58:00.823975    4740 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1226 22:58:00.824550    4740 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1226 22:58:00.838240    4740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 22:58:00.866668    4740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 22:58:00.880403    4740 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1226 22:58:00.880536    4740 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1226 22:58:00.880536    4740 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1226 22:58:00.880536    4740 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:58:00.880536    4740 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:58:00.880536    4740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1226 22:58:01.722098    4740 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:58:01.722098    4740 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:58:16.662288    4740 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 22:58:16.662288    4740 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1226 22:58:16.662288    4740 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 22:58:16.662288    4740 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 22:58:16.662288    4740 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:58:16.662288    4740 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:58:16.662288    4740 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:58:16.662288    4740 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:58:16.662288    4740 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:58:16.662288    4740 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:58:16.662288    4740 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:58:16.662288    4740 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:58:16.666143    4740 out.go:204]   - Generating certificates and keys ...
	I1226 22:58:16.666382    4740 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1226 22:58:16.666382    4740 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 22:58:16.666586    4740 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1226 22:58:16.666586    4740 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 22:58:16.666689    4740 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:58:16.666689    4740 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:58:16.666837    4740 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:58:16.666837    4740 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:58:16.667010    4740 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 22:58:16.667100    4740 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1226 22:58:16.667182    4740 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 22:58:16.667272    4740 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1226 22:58:16.667353    4740 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1226 22:58:16.667353    4740 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 22:58:16.667691    4740 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-455300] and IPs [172.21.184.4 127.0.0.1 ::1]
	I1226 22:58:16.667787    4740 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-455300] and IPs [172.21.184.4 127.0.0.1 ::1]
	I1226 22:58:16.667899    4740 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 22:58:16.667899    4740 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1226 22:58:16.668247    4740 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-455300] and IPs [172.21.184.4 127.0.0.1 ::1]
	I1226 22:58:16.668247    4740 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-455300] and IPs [172.21.184.4 127.0.0.1 ::1]
	I1226 22:58:16.668534    4740 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:58:16.668534    4740 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:58:16.668660    4740 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:58:16.668660    4740 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:58:16.668798    4740 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 22:58:16.668861    4740 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1226 22:58:16.669025    4740 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:58:16.669025    4740 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:58:16.669112    4740 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:58:16.669195    4740 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:58:16.669277    4740 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:58:16.669277    4740 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:58:16.669473    4740 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:58:16.669500    4740 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:58:16.669660    4740 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:58:16.669660    4740 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:58:16.669813    4740 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:58:16.669914    4740 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:58:16.669914    4740 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:58:16.669914    4740 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:58:16.674207    4740 out.go:204]   - Booting up control plane ...
	I1226 22:58:16.674207    4740 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:58:16.674207    4740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:58:16.674207    4740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:58:16.674207    4740 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:58:16.674925    4740 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:58:16.674988    4740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:58:16.675071    4740 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:58:16.675071    4740 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:58:16.675365    4740 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:58:16.675365    4740 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:58:16.675517    4740 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 22:58:16.675517    4740 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 22:58:16.675641    4740 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:58:16.675641    4740 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:58:16.675965    4740 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.006229 seconds
	I1226 22:58:16.675965    4740 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.006229 seconds
	I1226 22:58:16.676100    4740 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:58:16.676100    4740 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:58:16.676100    4740 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:58:16.676100    4740 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:58:16.676100    4740 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:58:16.676100    4740 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:58:16.676948    4740 command_runner.go:130] > [mark-control-plane] Marking the node multinode-455300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 22:58:16.676948    4740 kubeadm.go:322] [mark-control-plane] Marking the node multinode-455300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 22:58:16.676948    4740 command_runner.go:130] > [bootstrap-token] Using token: vo63na.yczeik9v5sh8hhok
	I1226 22:58:16.676948    4740 kubeadm.go:322] [bootstrap-token] Using token: vo63na.yczeik9v5sh8hhok
	I1226 22:58:16.678924    4740 out.go:204]   - Configuring RBAC rules ...
	I1226 22:58:16.679989    4740 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:58:16.680172    4740 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:58:16.680341    4740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:58:16.680341    4740 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:58:16.680580    4740 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:58:16.680580    4740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:58:16.681033    4740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:58:16.681033    4740 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:58:16.681252    4740 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:58:16.681252    4740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:58:16.681532    4740 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:58:16.681532    4740 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:58:16.681706    4740 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:58:16.681772    4740 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:58:16.681772    4740 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1226 22:58:16.681772    4740 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 22:58:16.681772    4740 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1226 22:58:16.681772    4740 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 22:58:16.681772    4740 kubeadm.go:322] 
	I1226 22:58:16.681772    4740 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1226 22:58:16.681772    4740 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 22:58:16.681772    4740 kubeadm.go:322] 
	I1226 22:58:16.682402    4740 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 22:58:16.682455    4740 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1226 22:58:16.682455    4740 kubeadm.go:322] 
	I1226 22:58:16.682575    4740 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1226 22:58:16.682575    4740 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 22:58:16.682811    4740 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:58:16.682811    4740 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:58:16.682923    4740 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:58:16.682980    4740 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:58:16.683036    4740 kubeadm.go:322] 
	I1226 22:58:16.683151    4740 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 22:58:16.683208    4740 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1226 22:58:16.683208    4740 kubeadm.go:322] 
	I1226 22:58:16.683433    4740 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 22:58:16.683433    4740 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 22:58:16.683433    4740 kubeadm.go:322] 
	I1226 22:58:16.683557    4740 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 22:58:16.683611    4740 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1226 22:58:16.683841    4740 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:58:16.683841    4740 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:58:16.684101    4740 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:58:16.684101    4740 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:58:16.684101    4740 kubeadm.go:322] 
	I1226 22:58:16.684235    4740 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:58:16.684373    4740 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:58:16.684551    4740 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 22:58:16.684551    4740 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1226 22:58:16.684551    4740 kubeadm.go:322] 
	I1226 22:58:16.684779    4740 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vo63na.yczeik9v5sh8hhok \
	I1226 22:58:16.684779    4740 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vo63na.yczeik9v5sh8hhok \
	I1226 22:58:16.685039    4740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 \
	I1226 22:58:16.685039    4740 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 \
	I1226 22:58:16.685039    4740 kubeadm.go:322] 	--control-plane 
	I1226 22:58:16.685200    4740 command_runner.go:130] > 	--control-plane 
	I1226 22:58:16.685200    4740 kubeadm.go:322] 
	I1226 22:58:16.685294    4740 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:58:16.685294    4740 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:58:16.685294    4740 kubeadm.go:322] 
	I1226 22:58:16.685294    4740 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vo63na.yczeik9v5sh8hhok \
	I1226 22:58:16.685294    4740 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vo63na.yczeik9v5sh8hhok \
	I1226 22:58:16.685294    4740 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 
	I1226 22:58:16.685871    4740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 
	I1226 22:58:16.686012    4740 cni.go:84] Creating CNI manager for ""
	I1226 22:58:16.686012    4740 cni.go:136] 1 nodes found, recommending kindnet
	I1226 22:58:16.689905    4740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 22:58:16.709743    4740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:58:16.717737    4740 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 22:58:16.717805    4740 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1226 22:58:16.717805    4740 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1226 22:58:16.717805    4740 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:58:16.717805    4740 command_runner.go:130] > Access: 2023-12-26 22:56:26.529435000 +0000
	I1226 22:58:16.717805    4740 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1226 22:58:16.717805    4740 command_runner.go:130] > Change: 2023-12-26 22:56:16.394000000 +0000
	I1226 22:58:16.717805    4740 command_runner.go:130] >  Birth: -
	I1226 22:58:16.717954    4740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 22:58:16.718033    4740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:58:16.761349    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:58:18.480288    4740 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1226 22:58:18.480971    4740 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1226 22:58:18.480971    4740 command_runner.go:130] > serviceaccount/kindnet created
	I1226 22:58:18.480971    4740 command_runner.go:130] > daemonset.apps/kindnet created
	I1226 22:58:18.481289    4740 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7199402s)
	I1226 22:58:18.481426    4740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 22:58:18.498443    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:18.498443    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-455300 minikube.k8s.io/updated_at=2023_12_26T22_58_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:18.513609    4740 command_runner.go:130] > -16
	I1226 22:58:18.513767    4740 ops.go:34] apiserver oom_adj: -16
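The two lines above show minikube reading `/proc/$(pgrep kube-apiserver)/oom_adj` and recording the value `-16`, confirming the apiserver is biased away from the kernel's OOM killer. A minimal sketch of that parse step (the function name is illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseOOMAdj parses the raw contents of /proc/<pid>/oom_adj, as read by
// the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command in the log.
// A negative value means the OOM killer is less likely to pick this process.
func parseOOMAdj(raw string) (int, error) {
	return strconv.Atoi(strings.TrimSpace(raw))
}

func main() {
	v, err := parseOOMAdj("-16\n") // value taken from the log above
	fmt.Println(v, err)
}
```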
	I1226 22:58:18.676392    4740 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1226 22:58:18.678765    4740 command_runner.go:130] > node/multinode-455300 labeled
	I1226 22:58:18.692417    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:18.824247    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:19.202340    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:19.329783    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:19.708577    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:19.847178    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:20.207829    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:20.344338    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:20.703573    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:20.834386    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:21.194641    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:21.309664    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:21.701319    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:21.823087    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:22.206976    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:22.316992    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:22.692212    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:22.814634    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:23.201219    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:23.316695    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:23.697789    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:23.810494    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:24.204588    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:24.326912    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:24.692228    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:24.813466    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:25.199574    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:25.321290    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:25.706592    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:25.860814    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:26.205392    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:26.353698    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:26.710687    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:26.827189    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:27.196180    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:27.338044    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:27.707396    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:27.848778    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:28.196539    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:28.314128    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:28.695993    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:28.868833    4740 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:58:29.204323    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:58:29.377983    4740 command_runner.go:130] > NAME      SECRETS   AGE
	I1226 22:58:29.378035    4740 command_runner.go:130] > default   0         0s
	I1226 22:58:29.378035    4740 kubeadm.go:1088] duration metric: took 10.8965574s to wait for elevateKubeSystemPrivileges.
	I1226 22:58:29.378165    4740 kubeadm.go:406] StartCluster complete in 28.6196563s
	I1226 22:58:29.378222    4740 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:58:29.378437    4740 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:58:29.379770    4740 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:58:29.381697    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 22:58:29.381933    4740 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 22:58:29.382062    4740 addons.go:69] Setting storage-provisioner=true in profile "multinode-455300"
	I1226 22:58:29.382126    4740 addons.go:237] Setting addon storage-provisioner=true in "multinode-455300"
	I1226 22:58:29.382189    4740 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 22:58:29.382126    4740 addons.go:69] Setting default-storageclass=true in profile "multinode-455300"
	I1226 22:58:29.382274    4740 host.go:66] Checking if "multinode-455300" exists ...
	I1226 22:58:29.382274    4740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-455300"
	I1226 22:58:29.383437    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:58:29.384152    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:58:29.397669    4740 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:58:29.398691    4740 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.184.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:58:29.399670    4740 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 22:58:29.400687    4740 round_trippers.go:463] GET https://172.21.184.4:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:58:29.400687    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:29.400687    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:29.400687    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:29.420981    4740 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1226 22:58:29.420981    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:29.420981    4740 round_trippers.go:580]     Audit-Id: cd8dc1ae-c15e-4289-97b1-37805e8fd651
	I1226 22:58:29.420981    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:29.420981    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:29.420981    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:29.420981    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:29.420981    4740 round_trippers.go:580]     Content-Length: 291
	I1226 22:58:29.420981    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:29 GMT
	I1226 22:58:29.420981    4740 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"265","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:58:29.420981    4740 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"265","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:58:29.421964    4740 round_trippers.go:463] PUT https://172.21.184.4:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:58:29.421964    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:29.421964    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:29.421964    4740 round_trippers.go:473]     Content-Type: application/json
	I1226 22:58:29.421964    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:29.431968    4740 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1226 22:58:29.431968    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:29.431968    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:29.431968    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:29.431968    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:29.431968    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:29.431968    4740 round_trippers.go:580]     Content-Length: 291
	I1226 22:58:29.431968    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:29 GMT
	I1226 22:58:29.431968    4740 round_trippers.go:580]     Audit-Id: aa2706f3-28db-45bf-bf72-31dcc888b280
	I1226 22:58:29.431968    4740 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"355","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:58:29.768018    4740 command_runner.go:130] > apiVersion: v1
	I1226 22:58:29.768368    4740 command_runner.go:130] > data:
	I1226 22:58:29.768368    4740 command_runner.go:130] >   Corefile: |
	I1226 22:58:29.768466    4740 command_runner.go:130] >     .:53 {
	I1226 22:58:29.768466    4740 command_runner.go:130] >         errors
	I1226 22:58:29.768466    4740 command_runner.go:130] >         health {
	I1226 22:58:29.768466    4740 command_runner.go:130] >            lameduck 5s
	I1226 22:58:29.768466    4740 command_runner.go:130] >         }
	I1226 22:58:29.768466    4740 command_runner.go:130] >         ready
	I1226 22:58:29.768466    4740 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1226 22:58:29.768563    4740 command_runner.go:130] >            pods insecure
	I1226 22:58:29.768610    4740 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1226 22:58:29.768610    4740 command_runner.go:130] >            ttl 30
	I1226 22:58:29.768610    4740 command_runner.go:130] >         }
	I1226 22:58:29.768610    4740 command_runner.go:130] >         prometheus :9153
	I1226 22:58:29.768664    4740 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1226 22:58:29.768664    4740 command_runner.go:130] >            max_concurrent 1000
	I1226 22:58:29.768664    4740 command_runner.go:130] >         }
	I1226 22:58:29.768692    4740 command_runner.go:130] >         cache 30
	I1226 22:58:29.768692    4740 command_runner.go:130] >         loop
	I1226 22:58:29.768737    4740 command_runner.go:130] >         reload
	I1226 22:58:29.768737    4740 command_runner.go:130] >         loadbalance
	I1226 22:58:29.768737    4740 command_runner.go:130] >     }
	I1226 22:58:29.768737    4740 command_runner.go:130] > kind: ConfigMap
	I1226 22:58:29.768737    4740 command_runner.go:130] > metadata:
	I1226 22:58:29.768737    4740 command_runner.go:130] >   creationTimestamp: "2023-12-26T22:58:16Z"
	I1226 22:58:29.768737    4740 command_runner.go:130] >   name: coredns
	I1226 22:58:29.768737    4740 command_runner.go:130] >   namespace: kube-system
	I1226 22:58:29.768737    4740 command_runner.go:130] >   resourceVersion: "261"
	I1226 22:58:29.768834    4740 command_runner.go:130] >   uid: d1f0a471-f150-4768-9d56-de6f75812b72
	I1226 22:58:29.771339    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.21.176.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 22:58:29.912574    4740 round_trippers.go:463] GET https://172.21.184.4:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:58:29.912669    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:29.912669    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:29.912669    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:29.929550    4740 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1226 22:58:29.930137    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:29.930137    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:29.930137    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:29.930137    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:29.930252    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:29.930252    4740 round_trippers.go:580]     Content-Length: 291
	I1226 22:58:29.930252    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:29 GMT
	I1226 22:58:29.930337    4740 round_trippers.go:580]     Audit-Id: 1ec35489-9749-41e5-8412-6d753a529221
	I1226 22:58:29.930422    4740 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"387","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1226 22:58:29.930666    4740 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-455300" context rescaled to 1 replicas
	I1226 22:58:29.930733    4740 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 22:58:29.935399    4740 out.go:177] * Verifying Kubernetes components...
	I1226 22:58:29.951102    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:58:30.634419    4740 command_runner.go:130] > configmap/coredns replaced
	I1226 22:58:30.639772    4740 start.go:929] {"host.minikube.internal": 172.21.176.1} host record injected into CoreDNS's ConfigMap
	I1226 22:58:30.640791    4740 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:58:30.642046    4740 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.184.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:58:30.643750    4740 node_ready.go:35] waiting up to 6m0s for node "multinode-455300" to be "Ready" ...
	I1226 22:58:30.644051    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:30.644051    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:30.644177    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:30.644177    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:30.648638    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:30.648638    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:30.648638    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:30.648638    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:30 GMT
	I1226 22:58:30.648638    4740 round_trippers.go:580]     Audit-Id: 9c043525-5525-4aba-a65e-b600399db4b7
	I1226 22:58:30.648638    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:30.648638    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:30.648638    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:30.649624    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:31.151595    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:31.151663    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:31.151663    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:31.151663    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:31.163390    4740 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1226 22:58:31.163753    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:31.163753    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:31.163753    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:31.163753    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:31 GMT
	I1226 22:58:31.163753    4740 round_trippers.go:580]     Audit-Id: 5186e88d-ebbf-4b28-84ac-d3d969701056
	I1226 22:58:31.163753    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:31.163880    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:31.164419    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:31.658526    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:31.658526    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:31.658526    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:31.658526    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:31.664358    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:31.664417    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:31.664491    4740 round_trippers.go:580]     Audit-Id: 3cdb65aa-235a-4670-9210-25979c4be78a
	I1226 22:58:31.664491    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:31.664491    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:31.664550    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:31.664591    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:31.664636    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:31 GMT
	I1226 22:58:31.665118    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:31.689197    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:58:31.689197    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:31.689197    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:58:31.689324    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:31.692633    4740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:58:31.690150    4740 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:58:31.695169    4740 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:58:31.695169    4740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 22:58:31.695169    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:58:31.695729    4740 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.184.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:58:31.696507    4740 addons.go:237] Setting addon default-storageclass=true in "multinode-455300"
	I1226 22:58:31.696667    4740 host.go:66] Checking if "multinode-455300" exists ...
	I1226 22:58:31.697484    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:58:32.149220    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:32.149282    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:32.149282    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:32.149282    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:32.153564    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:32.153564    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:32.153564    4740 round_trippers.go:580]     Audit-Id: 4d19d362-867c-4acd-a056-be196297dd13
	I1226 22:58:32.153564    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:32.153564    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:32.153564    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:32.153564    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:32.153564    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:32 GMT
	I1226 22:58:32.154949    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:32.658348    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:32.658547    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:32.658547    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:32.658547    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:32.663795    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:32.663795    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:32.663795    4740 round_trippers.go:580]     Audit-Id: 722362fc-3306-48b8-b538-d5d5f597aea3
	I1226 22:58:32.663795    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:32.663795    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:32.663795    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:32.663795    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:32.663795    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:32 GMT
	I1226 22:58:32.664783    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:32.665394    4740 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 22:58:33.151223    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:33.151325    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:33.151325    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:33.151432    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:33.155774    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:33.155774    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:33.156264    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:33.156264    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:33.156264    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:33 GMT
	I1226 22:58:33.156264    4740 round_trippers.go:580]     Audit-Id: 6faa0164-9cb5-4e7d-9ddd-626bfc427968
	I1226 22:58:33.156429    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:33.156429    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:33.156848    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:33.659024    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:33.659115    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:33.659115    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:33.659115    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:33.663534    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:33.663534    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:33.664031    4740 round_trippers.go:580]     Audit-Id: e54ce8fb-4e94-45ee-9856-dd9b87dc3126
	I1226 22:58:33.664031    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:33.664031    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:33.664031    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:33.664031    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:33.664031    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:33 GMT
	I1226 22:58:33.666450    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:33.924237    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:58:33.925001    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:33.925001    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:58:33.942272    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:58:33.942381    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:33.942646    4740 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 22:58:33.942690    4740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 22:58:33.942755    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 22:58:34.147652    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:34.147745    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:34.147745    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:34.147745    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:34.151087    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:34.151087    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:34.151087    4740 round_trippers.go:580]     Audit-Id: 78c6ea55-186a-466a-a4af-00ba7106d4f5
	I1226 22:58:34.151087    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:34.151845    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:34.151845    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:34.151845    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:34.151920    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:34 GMT
	I1226 22:58:34.152326    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:34.658315    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:34.658406    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:34.658406    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:34.658406    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:34.661772    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:34.662673    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:34.662673    4740 round_trippers.go:580]     Audit-Id: 85be788c-795e-458c-936c-be1b377f2b4d
	I1226 22:58:34.662673    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:34.662673    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:34.662673    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:34.662673    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:34.662673    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:34 GMT
	I1226 22:58:34.663168    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:35.153170    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:35.153170    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:35.153170    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:35.153278    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:35.158597    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:35.158824    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:35.158824    4740 round_trippers.go:580]     Audit-Id: 1ec1a8d4-5450-447e-9a22-2b4296f137b7
	I1226 22:58:35.158824    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:35.158922    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:35.158922    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:35.158922    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:35.158922    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:35 GMT
	I1226 22:58:35.159196    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:35.159648    4740 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 22:58:35.660260    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:35.660365    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:35.660365    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:35.660365    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:35.664720    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:35.664720    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:35.664720    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:35.664720    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:35.664720    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:35.664720    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:35 GMT
	I1226 22:58:35.664720    4740 round_trippers.go:580]     Audit-Id: 5c9f99cd-7f28-4941-977a-6b5f3bdd99dc
	I1226 22:58:35.665128    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:35.665379    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:36.151007    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:36.151099    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:36.151099    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:36.151099    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:36.159837    4740 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 22:58:36.159837    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:36.159837    4740 round_trippers.go:580]     Audit-Id: 5a9cb5b0-4fa8-412d-8330-2f82a1a4522b
	I1226 22:58:36.159837    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:36.159837    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:36.159837    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:36.159837    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:36.159837    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:36 GMT
	I1226 22:58:36.161054    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:36.197489    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:58:36.197489    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:36.197723    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 22:58:36.658572    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:36.658663    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:36.658663    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:36.658751    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:36.666695    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:36.666747    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:36.666747    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:36.666747    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:36.666846    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:36 GMT
	I1226 22:58:36.666846    4740 round_trippers.go:580]     Audit-Id: e48c3d45-2fba-4e60-bf60-c486ec1eb178
	I1226 22:58:36.666846    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:36.666902    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:36.667041    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:36.768743    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:58:36.769009    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:36.769198    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 22:58:36.991648    4740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:58:37.146296    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:37.146367    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:37.146367    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:37.146367    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:37.149550    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:37.149855    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:37.149879    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:37.149879    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:37.149879    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:37.149879    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:37.149879    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:37 GMT
	I1226 22:58:37.149879    4740 round_trippers.go:580]     Audit-Id: 2b8f9b0f-b674-4ad2-bda4-bbb8717a28fa
	I1226 22:58:37.150378    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:37.643825    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:37.643825    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:37.643825    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:37.643825    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:37.648966    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:37.649437    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:37.649437    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:37.649437    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:37.649437    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:37 GMT
	I1226 22:58:37.649437    4740 round_trippers.go:580]     Audit-Id: b838e0a7-93c0-401d-bf19-0708641950fc
	I1226 22:58:37.649625    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:37.649754    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:37.650288    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:37.650971    4740 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 22:58:38.111652    4740 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1226 22:58:38.112519    4740 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1226 22:58:38.112611    4740 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1226 22:58:38.112611    4740 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1226 22:58:38.112673    4740 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1226 22:58:38.112673    4740 command_runner.go:130] > pod/storage-provisioner created
	I1226 22:58:38.112702    4740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1210537s)
	I1226 22:58:38.148306    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:38.148507    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:38.148507    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:38.148507    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:38.151661    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:38.151661    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:38.151661    4740 round_trippers.go:580]     Audit-Id: 09012dea-32de-4ce2-95a3-125de0dbe5d6
	I1226 22:58:38.152217    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:38.152217    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:38.152217    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:38.152217    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:38.152217    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:38 GMT
	I1226 22:58:38.152689    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:38.656040    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:38.656147    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:38.656147    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:38.656147    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:38.663386    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:38.663386    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:38.663386    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:38.663386    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:38.663386    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:38.663386    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:38.663386    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:38 GMT
	I1226 22:58:38.663386    4740 round_trippers.go:580]     Audit-Id: 2a532acd-9903-4d81-849f-d0a6667780f1
	I1226 22:58:38.663992    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:38.862500    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 22:58:38.862500    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:38.863099    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 22:58:39.002846    4740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 22:58:39.148834    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:39.148834    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:39.148834    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:39.148834    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:39.158845    4740 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1226 22:58:39.158845    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:39.158845    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:39.158845    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:39.158845    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:39 GMT
	I1226 22:58:39.158845    4740 round_trippers.go:580]     Audit-Id: a48d2f42-191d-40f6-a995-bb8f3c6dabb6
	I1226 22:58:39.158845    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:39.158845    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:39.159840    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:39.386744    4740 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1226 22:58:39.387359    4740 round_trippers.go:463] GET https://172.21.184.4:8443/apis/storage.k8s.io/v1/storageclasses
	I1226 22:58:39.387397    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:39.387397    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:39.387397    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:39.396307    4740 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 22:58:39.396307    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:39.396307    4740 round_trippers.go:580]     Content-Length: 1273
	I1226 22:58:39.396307    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:39 GMT
	I1226 22:58:39.396307    4740 round_trippers.go:580]     Audit-Id: 4bee9cc9-56fa-4ef0-bc83-07e3c24a04d8
	I1226 22:58:39.396745    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:39.396745    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:39.396745    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:39.396745    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:39.396804    4740 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"standard","uid":"dcb76261-fcdf-464c-94c8-7f96ca82e41b","resourceVersion":"423","creationTimestamp":"2023-12-26T22:58:39Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:58:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1226 22:58:39.397639    4740 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"dcb76261-fcdf-464c-94c8-7f96ca82e41b","resourceVersion":"423","creationTimestamp":"2023-12-26T22:58:39Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:58:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1226 22:58:39.397720    4740 round_trippers.go:463] PUT https://172.21.184.4:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1226 22:58:39.397776    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:39.397776    4740 round_trippers.go:473]     Content-Type: application/json
	I1226 22:58:39.397776    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:39.397776    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:39.401508    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:39.401549    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:39.401549    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:39.401549    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:39.401595    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:39.401595    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:39.401595    4740 round_trippers.go:580]     Content-Length: 1220
	I1226 22:58:39.401595    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:39 GMT
	I1226 22:58:39.401664    4740 round_trippers.go:580]     Audit-Id: 12b9a2e7-96cc-4ac7-bfff-1f5832a89917
	I1226 22:58:39.401730    4740 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"dcb76261-fcdf-464c-94c8-7f96ca82e41b","resourceVersion":"423","creationTimestamp":"2023-12-26T22:58:39Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:58:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1226 22:58:39.405863    4740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 22:58:39.408025    4740 addons.go:508] enable addons completed in 10.0260932s: enabled=[storage-provisioner default-storageclass]
	I1226 22:58:39.655275    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:39.655345    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:39.655345    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:39.655345    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:39.658856    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:39.658856    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:39.658856    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:39.658856    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:39 GMT
	I1226 22:58:39.658856    4740 round_trippers.go:580]     Audit-Id: 9f1e9691-2f79-473d-aa04-05db10b0a753
	I1226 22:58:39.658856    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:39.658856    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:39.658856    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:39.658856    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:39.658856    4740 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 22:58:40.149494    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:40.149606    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:40.149606    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:40.149657    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:40.157672    4740 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 22:58:40.157672    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:40.157672    4740 round_trippers.go:580]     Audit-Id: 76ca42ba-9c0a-489f-8755-a9e4b2a00909
	I1226 22:58:40.157672    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:40.157672    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:40.157672    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:40.157672    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:40.157672    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:40 GMT
	I1226 22:58:40.158397    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:40.657550    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:40.657754    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:40.657754    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:40.657754    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:40.664324    4740 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 22:58:40.664324    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:40.664324    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:40.664324    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:40 GMT
	I1226 22:58:40.664324    4740 round_trippers.go:580]     Audit-Id: 8ae48ce9-ab16-4fec-b1d4-0bf4a6da602c
	I1226 22:58:40.664324    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:40.664324    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:40.664324    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:40.665233    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:41.158303    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:41.158411    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:41.158411    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:41.158487    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:41.357499    4740 round_trippers.go:574] Response Status: 200 OK in 198 milliseconds
	I1226 22:58:41.357499    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:41.357499    4740 round_trippers.go:580]     Audit-Id: b79f1a37-c32c-429b-a0c6-3a9a2d361b4a
	I1226 22:58:41.357499    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:41.357499    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:41.357499    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:41.357499    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:41.357499    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:41 GMT
	I1226 22:58:41.357824    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:41.659568    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:41.659568    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:41.659568    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:41.659568    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:41.663152    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:41.663152    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:41.663152    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:41 GMT
	I1226 22:58:41.663152    4740 round_trippers.go:580]     Audit-Id: 768efc66-3163-4793-8a69-3fdee95885ae
	I1226 22:58:41.663152    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:41.663152    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:41.663152    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:41.663598    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:41.663970    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:41.664563    4740 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 22:58:42.146273    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:42.146373    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:42.146373    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:42.146373    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:42.149995    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:42.149995    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:42.149995    4740 round_trippers.go:580]     Audit-Id: be798331-b370-4aad-ae39-3006dea50ca1
	I1226 22:58:42.150407    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:42.150407    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:42.150407    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:42.150407    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:42.150407    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:42 GMT
	I1226 22:58:42.150635    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:42.647207    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:42.647509    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:42.647777    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:42.647820    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:42.652076    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:42.652076    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:42.652076    4740 round_trippers.go:580]     Audit-Id: 938cdaa2-afb0-4420-b83a-e415a9a9cfae
	I1226 22:58:42.652076    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:42.652076    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:42.652076    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:42.652076    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:42.652446    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:42 GMT
	I1226 22:58:42.652704    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:43.149837    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:43.149951    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:43.149951    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:43.149951    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:43.154838    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:43.154838    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:43.154838    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:43.154838    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:43 GMT
	I1226 22:58:43.155246    4740 round_trippers.go:580]     Audit-Id: 3b4c94f4-c026-4b6b-9b0b-69d7eaba725c
	I1226 22:58:43.155246    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:43.155246    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:43.155246    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:43.155464    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:43.651645    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:43.651645    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:43.651645    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:43.651645    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:43.656232    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:43.656232    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:43.656232    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:43.656232    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:43.656232    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:43 GMT
	I1226 22:58:43.656232    4740 round_trippers.go:580]     Audit-Id: bdaffa32-8359-40c4-84b9-a5f25148fade
	I1226 22:58:43.656232    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:43.656232    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:43.656232    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"348","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1226 22:58:44.153170    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:44.153170    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:44.153170    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:44.153170    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:44.157794    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:44.157794    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:44.157794    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:44 GMT
	I1226 22:58:44.157794    4740 round_trippers.go:580]     Audit-Id: d0051f87-2e26-4402-8365-982f0e389c10
	I1226 22:58:44.158210    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:44.158210    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:44.158210    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:44.158210    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:44.159290    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:44.159815    4740 node_ready.go:49] node "multinode-455300" has status "Ready":"True"
	I1226 22:58:44.159871    4740 node_ready.go:38] duration metric: took 13.5160079s waiting for node "multinode-455300" to be "Ready" ...
	I1226 22:58:44.159871    4740 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:58:44.159976    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods
	I1226 22:58:44.160133    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:44.160164    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:44.160164    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:44.164429    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:44.164429    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:44.164429    4740 round_trippers.go:580]     Audit-Id: 91b8a807-57fc-47e0-a9df-28c231d03a67
	I1226 22:58:44.165230    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:44.165230    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:44.165230    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:44.165230    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:44.165230    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:44 GMT
	I1226 22:58:44.167056    4740 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"436","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I1226 22:58:44.172260    4740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:44.172260    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 22:58:44.172260    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:44.172260    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:44.172260    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:44.176876    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:44.176876    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:44.176876    4740 round_trippers.go:580]     Audit-Id: 36232607-5d50-4171-97fd-f9f2a3750cd9
	I1226 22:58:44.176876    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:44.176876    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:44.176876    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:44.176876    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:44.176876    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:44 GMT
	I1226 22:58:44.177999    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"436","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:58:44.178546    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:44.178610    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:44.178610    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:44.178610    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:44.185816    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:44.186825    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:44.186825    4740 round_trippers.go:580]     Audit-Id: 2cf505bb-ab31-4f17-8262-bec74047d8de
	I1226 22:58:44.186825    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:44.186825    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:44.186825    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:44.186825    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:44.186825    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:44 GMT
	I1226 22:58:44.195148    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:44.687613    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 22:58:44.687613    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:44.687613    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:44.687613    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:44.692286    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:44.692286    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:44.692286    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:44 GMT
	I1226 22:58:44.692286    4740 round_trippers.go:580]     Audit-Id: e1d86881-88b5-49c4-a990-7eb223094159
	I1226 22:58:44.692286    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:44.692718    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:44.692718    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:44.692718    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:44.693057    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"436","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:58:44.693825    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:44.693825    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:44.693959    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:44.693959    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:44.697207    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:44.697207    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:44.697207    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:44.697207    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:44.697207    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:44.697207    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:44.697207    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:44 GMT
	I1226 22:58:44.697207    4740 round_trippers.go:580]     Audit-Id: d2154655-aba1-4cf8-bfdc-a7e6e6f1acda
	I1226 22:58:44.697827    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:45.179193    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 22:58:45.179250    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:45.179299    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:45.179299    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:45.183722    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:45.183722    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:45.183722    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:45.183722    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:45 GMT
	I1226 22:58:45.183722    4740 round_trippers.go:580]     Audit-Id: 9b0d6d1f-4ead-4873-ba1a-cdfd3b8deadb
	I1226 22:58:45.183722    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:45.183722    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:45.183722    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:45.184633    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"436","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:58:45.185583    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:45.185635    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:45.185689    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:45.185689    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:45.192701    4740 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 22:58:45.192701    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:45.192701    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:45.192701    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:45.192701    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:45.192701    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:45 GMT
	I1226 22:58:45.192701    4740 round_trippers.go:580]     Audit-Id: b0380163-6b46-4df7-9fb3-02a41686d566
	I1226 22:58:45.192701    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:45.192701    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:45.673061    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 22:58:45.673143    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:45.673143    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:45.673143    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:45.682740    4740 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 22:58:45.682740    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:45.682740    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:45 GMT
	I1226 22:58:45.682740    4740 round_trippers.go:580]     Audit-Id: 8d0c623f-ca8c-49b6-8422-9c725088f57c
	I1226 22:58:45.682740    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:45.682740    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:45.682740    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:45.682740    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:45.682740    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"436","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:58:45.683735    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:45.683735    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:45.683735    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:45.683735    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:45.687742    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:45.687742    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:45.688296    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:45.688296    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:45.688296    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:45 GMT
	I1226 22:58:45.688296    4740 round_trippers.go:580]     Audit-Id: b85b11a1-0d06-4684-b776-c929d8598b7d
	I1226 22:58:45.688296    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:45.688296    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:45.688771    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.188034    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 22:58:46.188034    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.188034    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.188034    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.196128    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:46.196128    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.196128    4740 round_trippers.go:580]     Audit-Id: afcce53b-ab41-4cac-9d87-06efec14495f
	I1226 22:58:46.196128    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.196128    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.196263    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.196263    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.196263    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.196335    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"436","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:58:46.197473    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:46.197525    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.197525    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.197525    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.199752    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:58:46.199752    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.199752    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.199752    4740 round_trippers.go:580]     Audit-Id: b3e390c8-55db-469f-bc68-37bb9d6e59a8
	I1226 22:58:46.199752    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.199752    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.200800    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.200800    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.200859    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.200859    4740 pod_ready.go:102] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"False"
	I1226 22:58:46.676165    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 22:58:46.676250    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.676250    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.676250    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.683604    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:46.683604    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.683604    4740 round_trippers.go:580]     Audit-Id: 7c3406b0-eda0-4360-84de-4ff71460e1cd
	I1226 22:58:46.683604    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.683604    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.683987    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.683987    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.683987    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.684402    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"451","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I1226 22:58:46.684630    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:46.684630    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.684630    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.684630    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.687859    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:46.687859    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.687859    4740 round_trippers.go:580]     Audit-Id: 6bb93481-5a30-4dad-adfa-b09065e5857f
	I1226 22:58:46.687859    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.687859    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.687859    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.687859    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.687859    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.688818    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.688818    4740 pod_ready.go:92] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"True"
	I1226 22:58:46.688818    4740 pod_ready.go:81] duration metric: took 2.5165574s waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.688818    4740 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.688818    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 22:58:46.688818    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.688818    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.688818    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.693020    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:46.693020    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.693020    4740 round_trippers.go:580]     Audit-Id: f6d18f5f-8ff8-431f-8982-2f342fa0807d
	I1226 22:58:46.693020    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.693020    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.693446    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.693446    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.693446    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.693536    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"74a3baac-66f8-4934-bdb2-a8a34de26d03","resourceVersion":"412","creationTimestamp":"2023-12-26T22:58:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.184.4:2379","kubernetes.io/config.hash":"c67437441a51739d7438424fd3960b56","kubernetes.io/config.mirror":"c67437441a51739d7438424fd3960b56","kubernetes.io/config.seen":"2023-12-26T22:58:06.456133965Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1226 22:58:46.693536    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:46.693536    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.693536    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.693536    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.696523    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:58:46.696523    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.696523    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.696523    4740 round_trippers.go:580]     Audit-Id: 8d656f04-3fd7-4573-aa84-4c0a4383fd33
	I1226 22:58:46.696523    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.696523    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.696523    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.696523    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.697580    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.697580    4740 pod_ready.go:92] pod "etcd-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 22:58:46.697580    4740 pod_ready.go:81] duration metric: took 8.7629ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.697580    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.697580    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 22:58:46.697580    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.697580    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.697580    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.701934    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:58:46.701934    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.701934    4740 round_trippers.go:580]     Audit-Id: a91d4082-6c5f-461f-a3c6-970c0e6112a4
	I1226 22:58:46.701934    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.701934    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.701934    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.702807    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.702807    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.703209    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"001f1489-e4c6-4a35-9c04-992ddd0eea29","resourceVersion":"413","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.184.4:8443","kubernetes.io/config.hash":"f2597de8fcd5ba36e5afbfdfbed4b155","kubernetes.io/config.mirror":"f2597de8fcd5ba36e5afbfdfbed4b155","kubernetes.io/config.seen":"2023-12-26T22:58:16.785839510Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I1226 22:58:46.703860    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:46.703894    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.703894    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.703922    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.706310    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:58:46.706310    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.706310    4740 round_trippers.go:580]     Audit-Id: f1fbf7de-7883-42ea-bb94-c07ff13434c3
	I1226 22:58:46.706915    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.706915    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.706915    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.706915    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.706915    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.707012    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.707752    4740 pod_ready.go:92] pod "kube-apiserver-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 22:58:46.707752    4740 pod_ready.go:81] duration metric: took 10.1717ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.707752    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.707752    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 22:58:46.708315    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.708377    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.708377    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.711172    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:58:46.711172    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.711172    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.711609    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.711609    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.711609    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.711609    4740 round_trippers.go:580]     Audit-Id: 921dec5d-1a21-4342-91b3-adc1e69a576f
	I1226 22:58:46.711609    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.711693    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"410","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I1226 22:58:46.712230    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:46.712230    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.712230    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.712230    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.717949    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:46.717949    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.717949    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.717949    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.717949    4740 round_trippers.go:580]     Audit-Id: 70945683-e6b6-4c4c-b9bc-f1b2a175beab
	I1226 22:58:46.717949    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.717949    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.717949    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.718672    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.718757    4740 pod_ready.go:92] pod "kube-controller-manager-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 22:58:46.718757    4740 pod_ready.go:81] duration metric: took 11.0049ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.718757    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.719336    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 22:58:46.719336    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.719336    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.719336    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.723282    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:46.723282    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.723282    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.723282    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.723282    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.723282    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.723552    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.723552    4740 round_trippers.go:580]     Audit-Id: b460030c-d1f5-411e-8adb-4fba16788ae9
	I1226 22:58:46.724054    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"408","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I1226 22:58:46.724458    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:46.724458    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.724458    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.724458    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.727353    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:58:46.727353    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.727353    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.727353    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.727353    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.727353    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.727353    4740 round_trippers.go:580]     Audit-Id: fa50253d-0daa-4607-8d78-0878be317b7b
	I1226 22:58:46.727353    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.728148    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:46.728355    4740 pod_ready.go:92] pod "kube-proxy-hzcqb" in "kube-system" namespace has status "Ready":"True"
	I1226 22:58:46.728355    4740 pod_ready.go:81] duration metric: took 9.5982ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.728355    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:46.878443    4740 request.go:629] Waited for 149.834ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 22:58:46.878519    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 22:58:46.878519    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:46.878584    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:46.878584    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:46.886413    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:46.886826    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:46.886826    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:46 GMT
	I1226 22:58:46.886826    4740 round_trippers.go:580]     Audit-Id: fe455fd2-08ab-4763-b751-d5df5a3617ab
	I1226 22:58:46.886826    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:46.886903    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:46.886903    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:46.886903    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:46.887100    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"411","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1226 22:58:47.086642    4740 request.go:629] Waited for 198.5773ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:47.086883    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 22:58:47.086987    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:47.086987    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:47.086987    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:47.092603    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:47.092603    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:47.092603    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:47 GMT
	I1226 22:58:47.092603    4740 round_trippers.go:580]     Audit-Id: 20f45247-9960-4d7d-9098-8b7f7c766e03
	I1226 22:58:47.092881    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:47.092881    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:47.092881    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:47.092881    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:47.093074    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1226 22:58:47.093730    4740 pod_ready.go:92] pod "kube-scheduler-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 22:58:47.093799    4740 pod_ready.go:81] duration metric: took 365.4441ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 22:58:47.093799    4740 pod_ready.go:38] duration metric: took 2.9338803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:58:47.093861    4740 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:58:47.107199    4740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:58:47.130872    4740 command_runner.go:130] > 2058
	I1226 22:58:47.131328    4740 api_server.go:72] duration metric: took 17.2005311s to wait for apiserver process to appear ...
	I1226 22:58:47.131328    4740 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:58:47.131418    4740 api_server.go:253] Checking apiserver healthz at https://172.21.184.4:8443/healthz ...
	I1226 22:58:47.140014    4740 api_server.go:279] https://172.21.184.4:8443/healthz returned 200:
	ok
	I1226 22:58:47.140391    4740 round_trippers.go:463] GET https://172.21.184.4:8443/version
	I1226 22:58:47.140391    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:47.140448    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:47.140448    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:47.141824    4740 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:58:47.141824    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:47.141824    4740 round_trippers.go:580]     Audit-Id: 9a7fcf87-4848-44fd-9c60-4871e4a8f9d9
	I1226 22:58:47.141824    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:47.141824    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:47.141824    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:47.141824    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:47.141824    4740 round_trippers.go:580]     Content-Length: 264
	I1226 22:58:47.141824    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:47 GMT
	I1226 22:58:47.141824    4740 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1226 22:58:47.143085    4740 api_server.go:141] control plane version: v1.28.4
	I1226 22:58:47.143085    4740 api_server.go:131] duration metric: took 11.7568ms to wait for apiserver health ...
	I1226 22:58:47.143189    4740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:58:47.287475    4740 request.go:629] Waited for 144.1876ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods
	I1226 22:58:47.288030    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods
	I1226 22:58:47.288030    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:47.288030    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:47.288030    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:47.293697    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 22:58:47.293697    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:47.293806    4740 round_trippers.go:580]     Audit-Id: 2d0fe695-70b7-41d8-ab9b-025450e7c217
	I1226 22:58:47.293806    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:47.293806    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:47.293806    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:47.293860    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:47.293860    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:47 GMT
	I1226 22:58:47.295028    4740 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"451","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I1226 22:58:47.297714    4740 system_pods.go:59] 8 kube-system pods found
	I1226 22:58:47.297714    4740 system_pods.go:61] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "etcd-multinode-455300" [74a3baac-66f8-4934-bdb2-a8a34de26d03] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "kube-apiserver-multinode-455300" [001f1489-e4c6-4a35-9c04-992ddd0eea29] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running
	I1226 22:58:47.297714    4740 system_pods.go:61] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running
	I1226 22:58:47.297821    4740 system_pods.go:74] duration metric: took 154.5247ms to wait for pod list to return data ...
	I1226 22:58:47.297821    4740 default_sa.go:34] waiting for default service account to be created ...
	I1226 22:58:47.476834    4740 request.go:629] Waited for 178.9068ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:58:47.477025    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:58:47.477025    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:47.477025    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:47.477025    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:47.485131    4740 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:58:47.485131    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:47.485131    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:47.485131    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:47.485131    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:47.485131    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:47.485131    4740 round_trippers.go:580]     Content-Length: 261
	I1226 22:58:47.485131    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:47 GMT
	I1226 22:58:47.485131    4740 round_trippers.go:580]     Audit-Id: 5fc94eed-74cd-4703-9741-dd87dd6d4bbe
	I1226 22:58:47.485131    4740 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"52815640-9603-4e59-b38b-e19ec6f4b307","resourceVersion":"349","creationTimestamp":"2023-12-26T22:58:29Z"}}]}
	I1226 22:58:47.485131    4740 default_sa.go:45] found service account: "default"
	I1226 22:58:47.485131    4740 default_sa.go:55] duration metric: took 187.3097ms for default service account to be created ...
	I1226 22:58:47.485131    4740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 22:58:47.678926    4740 request.go:629] Waited for 192.8913ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods
	I1226 22:58:47.679222    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods
	I1226 22:58:47.679222    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:47.679222    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:47.679222    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:47.685793    4740 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 22:58:47.685793    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:47.685793    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:47 GMT
	I1226 22:58:47.685793    4740 round_trippers.go:580]     Audit-Id: 3f1da1d9-9f5d-40e3-b293-e2fecee647fa
	I1226 22:58:47.686104    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:47.686104    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:47.686104    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:47.686104    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:47.687551    4740 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"451","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I1226 22:58:47.690348    4740 system_pods.go:86] 8 kube-system pods found
	I1226 22:58:47.690415    4740 system_pods.go:89] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running
	I1226 22:58:47.690415    4740 system_pods.go:89] "etcd-multinode-455300" [74a3baac-66f8-4934-bdb2-a8a34de26d03] Running
	I1226 22:58:47.690415    4740 system_pods.go:89] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running
	I1226 22:58:47.690466    4740 system_pods.go:89] "kube-apiserver-multinode-455300" [001f1489-e4c6-4a35-9c04-992ddd0eea29] Running
	I1226 22:58:47.690466    4740 system_pods.go:89] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running
	I1226 22:58:47.690466    4740 system_pods.go:89] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running
	I1226 22:58:47.690466    4740 system_pods.go:89] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running
	I1226 22:58:47.690466    4740 system_pods.go:89] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running
	I1226 22:58:47.690466    4740 system_pods.go:126] duration metric: took 205.3351ms to wait for k8s-apps to be running ...
	I1226 22:58:47.690466    4740 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:58:47.701509    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:58:47.724183    4740 system_svc.go:56] duration metric: took 33.7165ms WaitForService to wait for kubelet.
	I1226 22:58:47.724183    4740 kubeadm.go:581] duration metric: took 17.7933858s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:58:47.724183    4740 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:58:47.881265    4740 request.go:629] Waited for 157.0826ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/nodes
	I1226 22:58:47.881265    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes
	I1226 22:58:47.881265    4740 round_trippers.go:469] Request Headers:
	I1226 22:58:47.881265    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:58:47.881265    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 22:58:47.885261    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:58:47.885261    4740 round_trippers.go:577] Response Headers:
	I1226 22:58:47.885261    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:58:47 GMT
	I1226 22:58:47.885261    4740 round_trippers.go:580]     Audit-Id: 48861a65-254b-4bfd-89df-8162688f75dc
	I1226 22:58:47.885261    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:58:47.885261    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 22:58:47.885261    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 22:58:47.885261    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 22:58:47.885261    4740 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"432","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I1226 22:58:47.886269    4740 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 22:58:47.886269    4740 node_conditions.go:123] node cpu capacity is 2
	I1226 22:58:47.886269    4740 node_conditions.go:105] duration metric: took 162.0865ms to run NodePressure ...
	I1226 22:58:47.886269    4740 start.go:228] waiting for startup goroutines ...
	I1226 22:58:47.886269    4740 start.go:233] waiting for cluster config update ...
	I1226 22:58:47.886269    4740 start.go:242] writing updated cluster config ...
	I1226 22:58:47.891277    4740 out.go:177] 
	I1226 22:58:47.901271    4740 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 22:58:47.901271    4740 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 22:58:47.908270    4740 out.go:177] * Starting worker node multinode-455300-m02 in cluster multinode-455300
	I1226 22:58:47.910263    4740 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 22:58:47.910263    4740 cache.go:56] Caching tarball of preloaded images
	I1226 22:58:47.911261    4740 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 22:58:47.911261    4740 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 22:58:47.911261    4740 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 22:58:47.915277    4740 start.go:365] acquiring machines lock for multinode-455300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 22:58:47.915277    4740 start.go:369] acquired machines lock for "multinode-455300-m02" in 0s
	I1226 22:58:47.916286    4740 start.go:93] Provisioning new machine with config: &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 22:58:47.916286    4740 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1226 22:58:47.920259    4740 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1226 22:58:47.920259    4740 start.go:159] libmachine.API.Create for "multinode-455300" (driver="hyperv")
	I1226 22:58:47.920259    4740 client.go:168] LocalClient.Create starting
	I1226 22:58:47.920259    4740 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1226 22:58:47.921278    4740 main.go:141] libmachine: Decoding PEM data...
	I1226 22:58:47.921278    4740 main.go:141] libmachine: Parsing certificate...
	I1226 22:58:47.921278    4740 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1226 22:58:47.921278    4740 main.go:141] libmachine: Decoding PEM data...
	I1226 22:58:47.921278    4740 main.go:141] libmachine: Parsing certificate...
	I1226 22:58:47.921278    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1226 22:58:49.840818    4740 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1226 22:58:49.840818    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:49.840939    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1226 22:58:51.592355    4740 main.go:141] libmachine: [stdout =====>] : False
	
	I1226 22:58:51.592669    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:51.592669    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1226 22:58:53.122397    4740 main.go:141] libmachine: [stdout =====>] : True
	
	I1226 22:58:53.122729    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:53.123071    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1226 22:58:56.809426    4740 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1226 22:58:56.809521    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:58:56.812542    4740 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1226 22:58:57.322738    4740 main.go:141] libmachine: Creating SSH key...
	I1226 22:58:57.476532    4740 main.go:141] libmachine: Creating VM...
	I1226 22:58:57.476532    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1226 22:59:00.459792    4740 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1226 22:59:00.460152    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:00.460152    4740 main.go:141] libmachine: Using switch "Default Switch"
	I1226 22:59:00.460152    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1226 22:59:02.286133    4740 main.go:141] libmachine: [stdout =====>] : True
	
	I1226 22:59:02.286403    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:02.286403    4740 main.go:141] libmachine: Creating VHD
	I1226 22:59:02.286685    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1226 22:59:06.030820    4740 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F2393C35-FD9C-4B40-9127-2A2512AA3A02
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	I1226 22:59:06.030820    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:06.030820    4740 main.go:141] libmachine: Writing magic tar header
	I1226 22:59:06.030965    4740 main.go:141] libmachine: Writing SSH key tar header
	I1226 22:59:06.043572    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1226 22:59:09.251316    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:09.251672    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:09.251762    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\disk.vhd' -SizeBytes 20000MB
	I1226 22:59:11.835185    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:11.835347    4740 main.go:141] libmachine: [stderr =====>] : 
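	The three commands above are the driver's data-disk trick: create a tiny fixed VHD, write a "magic" tar stream (carrying the SSH key) into its raw start, then convert it to a dynamic VHD and resize it. A minimal sketch of the raw-write step, using a plain scratch file in place of the real VHD and a placeholder key file (both are assumptions for illustration):

	```shell
	# Scratch file standing in for the 10MB fixed VHD created by New-VHD
	DISK=$(mktemp)
	truncate -s 10M "$DISK"

	# Placeholder SSH key (the real driver embeds the machine's id_rsa)
	KEYDIR=$(mktemp -d)
	echo "ssh-rsa AAAA... placeholder" > "$KEYDIR/id_rsa"

	# Write a tar stream into the raw start of the disk without truncating it;
	# the first bytes of a tar header are the member's file name
	tar -cf - -C "$KEYDIR" id_rsa | dd of="$DISK" conv=notrunc 2>/dev/null

	head -c 6 "$DISK"; echo   # the tar header begins with the file name
	```

	The Convert-VHD/Resize-VHD steps have no shell analogue here; on the real host they turn the seeded fixed image into a 20000MB dynamic disk.
	
	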
	I1226 22:59:11.835347    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-455300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1226 22:59:15.643038    4740 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-455300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1226 22:59:15.643243    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:15.643243    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-455300-m02 -DynamicMemoryEnabled $false
	I1226 22:59:17.953907    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:17.954175    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:17.954175    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-455300-m02 -Count 2
	I1226 22:59:20.263501    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:20.263501    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:20.263615    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-455300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\boot2docker.iso'
	I1226 22:59:22.951693    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:22.951693    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:22.951821    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-455300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\disk.vhd'
	I1226 22:59:25.667209    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:25.667209    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:25.667299    4740 main.go:141] libmachine: Starting VM...
	I1226 22:59:25.667299    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300-m02
	I1226 22:59:28.804773    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:28.804851    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:28.804851    4740 main.go:141] libmachine: Waiting for host to start...
	I1226 22:59:28.804851    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 22:59:31.145005    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:59:31.145005    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:31.145005    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 22:59:33.739265    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:33.739265    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:34.752578    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 22:59:36.978842    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:59:36.979149    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:36.979413    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 22:59:39.564737    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:39.564780    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:40.570679    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 22:59:42.800744    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:59:42.800744    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:42.800850    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 22:59:45.358243    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:45.358677    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:46.372968    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 22:59:48.601524    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:59:48.601524    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:48.601639    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 22:59:51.125626    4740 main.go:141] libmachine: [stdout =====>] : 
	I1226 22:59:51.125951    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:52.127577    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 22:59:54.376145    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:59:54.376145    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:54.376145    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 22:59:57.046272    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 22:59:57.046272    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 22:59:57.046393    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 22:59:59.215209    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 22:59:59.215209    4740 main.go:141] libmachine: [stderr =====>] : 
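	The "Waiting for host to start..." section above is a poll loop: query the VM state, then its first adapter's first IP address, and repeat (with a pause) until an address appears. A sketch of that loop, with a simple counter standing in for the real `powershell.exe Get-VM` queries (an assumption; the actual driver shells out to Hyper-V roughly once per iteration):

	```shell
	attempt=0
	ip=""
	while [ -z "$ip" ]; do
	  attempt=$((attempt + 1))
	  # Stub query: pretend the VM reports no IP for the first two polls,
	  # then comes up with the address seen in the log above
	  if [ "$attempt" -ge 3 ]; then
	    ip="172.21.187.58"
	  fi
	  # the real loop sleeps ~1s between polls
	done
	echo "VM reachable at $ip after $attempt polls"
	```
	
	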
	I1226 22:59:59.215541    4740 machine.go:88] provisioning docker machine ...
	I1226 22:59:59.215541    4740 buildroot.go:166] provisioning hostname "multinode-455300-m02"
	I1226 22:59:59.215650    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:01.402457    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:01.402634    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:01.402634    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:04.019623    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:04.019729    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:04.023544    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:04.039973    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:04.040100    4740 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300-m02 && echo "multinode-455300-m02" | sudo tee /etc/hostname
	I1226 23:00:04.222222    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300-m02
	
	I1226 23:00:04.222222    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:06.428519    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:06.428519    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:06.428622    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:09.042605    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:09.042605    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:09.047931    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:09.048646    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:09.048646    4740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
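	The SSH command above keeps `/etc/hosts` consistent with the new hostname: if the name is absent, either rewrite an existing `127.0.1.1` line or append one. The same logic against a scratch copy (no `sudo`, sample contents are assumptions) so it can be exercised anywhere:

	```shell
	HOSTS=$(mktemp)
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	NAME=multinode-455300-m02   # hostname taken from the log above

	if ! grep -q "\s$NAME$" "$HOSTS"; then
	  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
	    # a 127.0.1.1 entry exists: repoint it at the new name
	    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
	  else
	    # no 127.0.1.1 entry yet: append one
	    echo "127.0.1.1 $NAME" >> "$HOSTS"
	  fi
	fi
	grep '^127\.0\.1\.1' "$HOSTS"
	```

	Running it a second time is a no-op, which is why the provisioner can issue it unconditionally on every start.
	
	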
	I1226 23:00:09.220064    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 23:00:09.220064    4740 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:00:09.220064    4740 buildroot.go:174] setting up certificates
	I1226 23:00:09.220064    4740 provision.go:83] configureAuth start
	I1226 23:00:09.220064    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:11.420001    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:11.420001    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:11.420097    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:14.023045    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:14.023479    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:14.023479    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:16.262886    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:16.262886    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:16.262886    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:18.868933    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:18.869113    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:18.869113    4740 provision.go:138] copyHostCerts
	I1226 23:00:18.869310    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:00:18.869647    4740 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:00:18.869722    4740 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:00:18.870213    4740 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:00:18.871382    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:00:18.871541    4740 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:00:18.871541    4740 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:00:18.872134    4740 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:00:18.872757    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:00:18.872757    4740 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:00:18.873390    4740 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:00:18.873670    4740 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:00:18.875368    4740 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300-m02 san=[172.21.187.58 172.21.187.58 localhost 127.0.0.1 minikube multinode-455300-m02]
	I1226 23:00:19.012064    4740 provision.go:172] copyRemoteCerts
	I1226 23:00:19.027308    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:00:19.027308    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:21.285373    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:21.285574    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:21.285574    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:23.935778    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:23.935818    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:23.936091    4740 sshutil.go:53] new ssh client: &{IP:172.21.187.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:00:24.044914    4740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0176061s)
	I1226 23:00:24.045060    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:00:24.045568    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:00:24.087998    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:00:24.088486    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 23:00:24.132056    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:00:24.132246    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 23:00:24.173817    4740 provision.go:86] duration metric: configureAuth took 14.9537542s
	I1226 23:00:24.173817    4740 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:00:24.173817    4740 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:00:24.174817    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:26.380044    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:26.380208    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:26.380208    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:29.040969    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:29.040969    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:29.046529    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:29.047246    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:29.047246    4740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:00:29.205090    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:00:29.205090    4740 buildroot.go:70] root file system type: tmpfs
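	The root-filesystem probe above is a one-liner; buildroot images report `tmpfs` here, which tells the provisioner how to persist the docker unit. The same probe, runnable on any GNU system:

	```shell
	# Ask df for just the fstype column of /, then keep the data row
	FSTYPE=$(df --output=fstype / | tail -n 1)
	echo "root fstype: $FSTYPE"
	```
	
	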
	I1226 23:00:29.205413    4740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:00:29.205453    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:31.439346    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:31.439346    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:31.439744    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:34.035402    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:34.035402    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:34.043079    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:34.044537    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:34.044666    4740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.21.184.4"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:00:34.220704    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.21.184.4
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:00:34.221255    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:36.400680    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:36.401021    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:36.401100    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:38.989104    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:38.989197    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:38.996320    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:38.997168    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:38.997168    4740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:00:40.184128    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:00:40.184128    4740 machine.go:91] provisioned docker machine in 40.9685905s
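	The `diff ... || { mv ...; systemctl ...; }` one-liner above is an install-if-changed pattern: only when the staged `docker.service.new` differs from the installed unit (or the unit is missing, as on this first boot) is it moved into place and the daemon reloaded/restarted. The compare-then-swap core, sketched on scratch files with the `systemctl` side effects omitted (an assumption for testability):

	```shell
	OLD=$(mktemp -u)    # a path that does not exist yet, like the first boot above
	NEW=$(mktemp)
	printf '[Unit]\nDescription=demo unit\n' > "$NEW"

	# diff exits non-zero when the files differ or OLD is missing,
	# so the install branch runs exactly in those cases
	if ! diff -u "$OLD" "$NEW" >/dev/null 2>&1; then
	  mv "$NEW" "$OLD"
	  echo "unit installed"   # the real command follows with daemon-reload,
	fi                        # enable, and restart via systemctl
	```

	On an unchanged unit the branch is skipped entirely, so repeated provisioning does not restart docker needlessly.
	
	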
	I1226 23:00:40.184128    4740 client.go:171] LocalClient.Create took 1m52.2638796s
	I1226 23:00:40.184128    4740 start.go:167] duration metric: libmachine.API.Create for "multinode-455300" took 1m52.2638796s
	I1226 23:00:40.184128    4740 start.go:300] post-start starting for "multinode-455300-m02" (driver="hyperv")
	I1226 23:00:40.184128    4740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:00:40.196807    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:00:40.196807    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:42.348587    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:42.348587    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:42.348695    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:44.931300    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:44.931300    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:44.931658    4740 sshutil.go:53] new ssh client: &{IP:172.21.187.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:00:45.057836    4740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8601459s)
	I1226 23:00:45.071792    4740 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:00:45.077797    4740 command_runner.go:130] > NAME=Buildroot
	I1226 23:00:45.077797    4740 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:00:45.077797    4740 command_runner.go:130] > ID=buildroot
	I1226 23:00:45.077797    4740 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:00:45.077797    4740 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:00:45.077797    4740 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:00:45.077797    4740 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:00:45.078796    4740 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:00:45.079938    4740 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:00:45.079938    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:00:45.093938    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:00:45.111369    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:00:45.152284    4740 start.go:303] post-start completed in 4.9680888s
	I1226 23:00:45.155683    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:47.345011    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:47.345011    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:47.345131    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:49.964867    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:49.965219    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:49.965474    4740 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:00:49.967887    4740 start.go:128] duration metric: createHost completed in 2m2.0516133s
	I1226 23:00:49.967887    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:52.161391    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:52.161682    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:52.161682    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:54.765080    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:54.765080    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:54.770970    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:54.771615    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:54.771615    4740 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1226 23:00:54.926788    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703631654.924250043
	
	I1226 23:00:54.926978    4740 fix.go:206] guest clock: 1703631654.924250043
	I1226 23:00:54.926978    4740 fix.go:219] Guest: 2023-12-26 23:00:54.924250043 +0000 UTC Remote: 2023-12-26 23:00:49.9678878 +0000 UTC m=+335.777960601 (delta=4.956362243s)
	I1226 23:00:54.927092    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:00:57.094959    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:00:57.095352    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:57.095352    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:00:59.672842    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:00:59.672842    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:00:59.679166    4740 main.go:141] libmachine: Using SSH client type: native
	I1226 23:00:59.680119    4740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.58 22 <nil> <nil>}
	I1226 23:00:59.680119    4740 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703631654
	I1226 23:00:59.844699    4740 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:00:54 UTC 2023
	
	I1226 23:00:59.844699    4740 fix.go:226] clock set: Tue Dec 26 23:00:54 UTC 2023
	 (err=<nil>)
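The clock-sync exchange above (fix.go) reads the guest clock over SSH with `date +%s.%N`, computes the host/guest delta, and rewrites the guest clock with `sudo date -s @<epoch>` when it drifts. A minimal local sketch of the same comparison, with no SSH involved and a hypothetical 1-second tolerance (the epoch value is the one the log captured):

```shell
#!/bin/sh
# Mirror fix.go's skew check: compare a captured "guest" timestamp
# against the local clock and decide whether a reset would be needed.
guest_epoch=1703631654          # value the log read from the guest's `date +%s.%N`
host_epoch=$(date +%s)
delta=$((host_epoch - guest_epoch))
[ "$delta" -lt 0 ] && delta=$((-delta))
if [ "$delta" -gt 1 ]; then
    # On the real VM this step runs: sudo date -s @"$host_epoch"
    echo "clock skew ${delta}s: would run 'date -s @${host_epoch}'"
else
    echo "clock within tolerance"
fi
```

Since the captured epoch is from December 2023, running this today always takes the skew branch.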
	I1226 23:00:59.844699    4740 start.go:83] releasing machines lock for "multinode-455300-m02", held for 2m11.9294359s
	I1226 23:00:59.844699    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:01:02.038270    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:01:02.038647    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:02.038647    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:01:04.623142    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:01:04.623142    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:04.626695    4740 out.go:177] * Found network options:
	I1226 23:01:04.629972    4740 out.go:177]   - NO_PROXY=172.21.184.4
	W1226 23:01:04.632338    4740 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 23:01:04.634927    4740 out.go:177]   - NO_PROXY=172.21.184.4
	W1226 23:01:04.637491    4740 proxy.go:119] fail to check proxy env: Error ip not in block
	W1226 23:01:04.638970    4740 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 23:01:04.641987    4740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 23:01:04.642087    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:01:04.652950    4740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 23:01:04.652950    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:01:06.879242    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:01:06.879242    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:01:06.879242    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:06.879242    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:06.879242    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:01:06.879947    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:01:09.635316    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:01:09.635886    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:09.635948    4740 sshutil.go:53] new ssh client: &{IP:172.21.187.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:01:09.654289    4740 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:01:09.654289    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:09.654538    4740 sshutil.go:53] new ssh client: &{IP:172.21.187.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:01:09.821414    4740 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 23:01:09.821414    4740 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1226 23:01:09.821414    4740 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1793273s)
	I1226 23:01:09.821515    4740 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1685651s)
	W1226 23:01:09.821515    4740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 23:01:09.835265    4740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 23:01:09.861052    4740 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1226 23:01:09.861994    4740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
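The `find ... -exec sh -c "sudo mv {} {}.mk_disabled"` step above renames any bridge/podman CNI configs out of the way while skipping files already disabled. A sketch of the same rename logic against a scratch directory (the real run targets `/etc/cni/net.d` and needs sudo; the file names here are illustrative):

```shell
#!/bin/sh
# Reproduce the bridge/podman CNI disabling step in a temp directory.
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/10-kindnet.conflist"
find "$dir" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$dir"   # the podman-bridge file now carries the .mk_disabled suffix
```

The `-not -name '*.mk_disabled'` guard makes the operation idempotent across repeated starts.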
	I1226 23:01:09.861994    4740 start.go:475] detecting cgroup driver to use...
	I1226 23:01:09.862159    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:01:09.899526    4740 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1226 23:01:09.911469    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 23:01:09.947926    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 23:01:09.965390    4740 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 23:01:09.977912    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 23:01:10.007635    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:01:10.038649    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 23:01:10.069857    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:01:10.101346    4740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 23:01:10.144461    4740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1226 23:01:10.174108    4740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 23:01:10.190396    4740 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 23:01:10.201995    4740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 23:01:10.231078    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:01:10.406741    4740 ssh_runner.go:195] Run: sudo systemctl restart containerd
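The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place (pause image, cgroup driver, CNI conf dir) before the daemon restart. Here are three of those edits applied to a scratch copy so the effect is visible; the sample TOML content is a stand-in, not the VM's actual file:

```shell
#!/bin/sh
# Apply minikube's containerd config edits to a throwaway config file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
  conf_dir = "/etc/cni/net.mk"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
cat "$cfg"
```

The `( *)`/`\1` capture preserves whatever indentation the TOML file already uses, which is why the pattern anchors on the whole line rather than just the key.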
	I1226 23:01:10.436262    4740 start.go:475] detecting cgroup driver to use...
	I1226 23:01:10.448161    4740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 23:01:10.469958    4740 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1226 23:01:10.469958    4740 command_runner.go:130] > [Unit]
	I1226 23:01:10.470020    4740 command_runner.go:130] > Description=Docker Application Container Engine
	I1226 23:01:10.470020    4740 command_runner.go:130] > Documentation=https://docs.docker.com
	I1226 23:01:10.470020    4740 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1226 23:01:10.470020    4740 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1226 23:01:10.470020    4740 command_runner.go:130] > StartLimitBurst=3
	I1226 23:01:10.470078    4740 command_runner.go:130] > StartLimitIntervalSec=60
	I1226 23:01:10.470078    4740 command_runner.go:130] > [Service]
	I1226 23:01:10.470078    4740 command_runner.go:130] > Type=notify
	I1226 23:01:10.470078    4740 command_runner.go:130] > Restart=on-failure
	I1226 23:01:10.470078    4740 command_runner.go:130] > Environment=NO_PROXY=172.21.184.4
	I1226 23:01:10.470078    4740 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1226 23:01:10.470148    4740 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1226 23:01:10.470169    4740 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1226 23:01:10.470169    4740 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1226 23:01:10.470169    4740 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1226 23:01:10.470169    4740 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1226 23:01:10.470169    4740 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1226 23:01:10.470228    4740 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1226 23:01:10.470228    4740 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1226 23:01:10.470296    4740 command_runner.go:130] > ExecStart=
	I1226 23:01:10.470365    4740 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1226 23:01:10.470365    4740 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1226 23:01:10.470365    4740 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1226 23:01:10.470426    4740 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1226 23:01:10.470426    4740 command_runner.go:130] > LimitNOFILE=infinity
	I1226 23:01:10.470426    4740 command_runner.go:130] > LimitNPROC=infinity
	I1226 23:01:10.470426    4740 command_runner.go:130] > LimitCORE=infinity
	I1226 23:01:10.470426    4740 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1226 23:01:10.470426    4740 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1226 23:01:10.470426    4740 command_runner.go:130] > TasksMax=infinity
	I1226 23:01:10.470426    4740 command_runner.go:130] > TimeoutStartSec=0
	I1226 23:01:10.470485    4740 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1226 23:01:10.470485    4740 command_runner.go:130] > Delegate=yes
	I1226 23:01:10.470485    4740 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1226 23:01:10.470485    4740 command_runner.go:130] > KillMode=process
	I1226 23:01:10.470485    4740 command_runner.go:130] > [Install]
	I1226 23:01:10.470485    4740 command_runner.go:130] > WantedBy=multi-user.target
	I1226 23:01:10.484464    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:01:10.513441    4740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 23:01:10.560811    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:01:10.593784    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:01:10.627785    4740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 23:01:10.694032    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:01:10.714666    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:01:10.742663    4740 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1226 23:01:10.757807    4740 ssh_runner.go:195] Run: which cri-dockerd
	I1226 23:01:10.763249    4740 command_runner.go:130] > /usr/bin/cri-dockerd
	I1226 23:01:10.777493    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 23:01:10.791387    4740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 23:01:10.829016    4740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 23:01:11.002094    4740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 23:01:11.167761    4740 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 23:01:11.167954    4740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 23:01:11.209864    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:01:11.382629    4740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 23:01:12.961866    4740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5792369s)
	I1226 23:01:12.974862    4740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:01:13.145110    4740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 23:01:13.314762    4740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:01:13.492464    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:01:13.668704    4740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 23:01:13.710124    4740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:01:13.887243    4740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 23:01:14.000525    4740 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 23:01:14.014543    4740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 23:01:14.022855    4740 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1226 23:01:14.022919    4740 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 23:01:14.022919    4740 command_runner.go:130] > Device: 16h/22d	Inode: 886         Links: 1
	I1226 23:01:14.022988    4740 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1226 23:01:14.022988    4740 command_runner.go:130] > Access: 2023-12-26 23:01:13.905646469 +0000
	I1226 23:01:14.022988    4740 command_runner.go:130] > Modify: 2023-12-26 23:01:13.905646469 +0000
	I1226 23:01:14.022988    4740 command_runner.go:130] > Change: 2023-12-26 23:01:13.909646469 +0000
	I1226 23:01:14.022988    4740 command_runner.go:130] >  Birth: -
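After restarting cri-docker, start.go waits up to 60s for `/var/run/cri-dockerd.sock` to appear and then `stat`s it. A generic "wait for a path" loop in the same spirit, demonstrated on a scratch file created in the background (the path and timings are illustrative):

```shell
#!/bin/sh
# Poll for a path with a bounded timeout, like the 60s socket wait above.
path=$(mktemp -u)
( sleep 1; touch "$path" ) &
for i in $(seq 1 60); do
    [ -e "$path" ] && break
    sleep 1
done
stat "$path" > /dev/null && echo "socket ready"
```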
	I1226 23:01:14.023050    4740 start.go:543] Will wait 60s for crictl version
	I1226 23:01:14.036853    4740 ssh_runner.go:195] Run: which crictl
	I1226 23:01:14.040905    4740 command_runner.go:130] > /usr/bin/crictl
	I1226 23:01:14.055142    4740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 23:01:14.131878    4740 command_runner.go:130] > Version:  0.1.0
	I1226 23:01:14.131878    4740 command_runner.go:130] > RuntimeName:  docker
	I1226 23:01:14.131878    4740 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1226 23:01:14.131878    4740 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 23:01:14.133869    4740 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 23:01:14.144860    4740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:01:14.182006    4740 command_runner.go:130] > 24.0.7
	I1226 23:01:14.192528    4740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:01:14.230534    4740 command_runner.go:130] > 24.0.7
	I1226 23:01:14.234525    4740 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 23:01:14.237661    4740 out.go:177]   - env NO_PROXY=172.21.184.4
	I1226 23:01:14.239530    4740 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 23:01:14.243523    4740 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 23:01:14.243523    4740 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 23:01:14.243523    4740 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 23:01:14.243523    4740 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 23:01:14.246523    4740 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 23:01:14.246523    4740 ip.go:210] interface addr: 172.21.176.1/20
	I1226 23:01:14.259525    4740 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 23:01:14.265817    4740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
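The `/etc/hosts` command above refreshes the `host.minikube.internal` entry idempotently: filter out any existing line, append the new mapping, then copy the result back with sudo. The same pattern on a scratch copy (addresses are the ones from the log; the temp file stands in for `/etc/hosts`):

```shell
#!/bin/sh
# Idempotent hosts-entry refresh: remove the old mapping, append the new one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host\.minikube\.internal' "$hosts"
  printf '172.21.176.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old line is stripped before the append, repeated runs leave exactly one `host.minikube.internal` entry.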
	I1226 23:01:14.285777    4740 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300 for IP: 172.21.187.58
	I1226 23:01:14.285891    4740 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:01:14.286821    4740 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 23:01:14.287316    4740 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 23:01:14.287493    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 23:01:14.287691    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1226 23:01:14.287853    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 23:01:14.287853    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 23:01:14.288786    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1226 23:01:14.289085    4740 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1226 23:01:14.289085    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 23:01:14.289085    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 23:01:14.289674    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 23:01:14.290034    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 23:01:14.290593    4740 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1226 23:01:14.290800    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /usr/share/ca-certificates/107282.pem
	I1226 23:01:14.291056    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:01:14.291228    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem -> /usr/share/ca-certificates/10728.pem
	I1226 23:01:14.291891    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 23:01:14.331591    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 23:01:14.369125    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 23:01:14.410317    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 23:01:14.448169    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1226 23:01:14.490391    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 23:01:14.539485    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1226 23:01:14.594639    4740 ssh_runner.go:195] Run: openssl version
	I1226 23:01:14.603897    4740 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1226 23:01:14.618867    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 23:01:14.650532    4740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:01:14.657202    4740 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:01:14.657304    4740 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:01:14.670134    4740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:01:14.677257    4740 command_runner.go:130] > b5213941
	I1226 23:01:14.690729    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 23:01:14.721047    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1226 23:01:14.751913    4740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1226 23:01:14.758638    4740 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:01:14.758638    4740 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:01:14.771749    4740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1226 23:01:14.779427    4740 command_runner.go:130] > 51391683
	I1226 23:01:14.793747    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1226 23:01:14.824194    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1226 23:01:14.859680    4740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1226 23:01:14.866208    4740 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:01:14.866208    4740 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:01:14.881024    4740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1226 23:01:14.888758    4740 command_runner.go:130] > 3ec20f2e
	I1226 23:01:14.902131    4740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
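The cert-installation loop above works because OpenSSL locates CA certificates by subject-hash symlinks: `openssl x509 -hash` yields the 8-hex-digit hash (e.g. `b5213941`, `51391683`, `3ec20f2e` in the log), and a `<hash>.0` symlink in the certs directory points at the PEM. A sketch of that link step using a scratch directory and a throwaway self-signed cert (the `demo-ca` subject is invented for illustration):

```shell
#!/bin/sh
# Create a subject-hash symlink the way the ln -fs commands above do.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
    -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```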
	I1226 23:01:14.931850    4740 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 23:01:14.937931    4740 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 23:01:14.937931    4740 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 23:01:14.948292    4740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 23:01:14.985996    4740 command_runner.go:130] > cgroupfs
	I1226 23:01:14.986904    4740 cni.go:84] Creating CNI manager for ""
	I1226 23:01:14.986904    4740 cni.go:136] 2 nodes found, recommending kindnet
	I1226 23:01:14.986904    4740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 23:01:14.987087    4740 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.187.58 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-455300 NodeName:multinode-455300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.184.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.187.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 23:01:14.987322    4740 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.187.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-455300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.21.187.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.184.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 23:01:14.987451    4740 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-455300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.187.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 23:01:15.001170    4740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 23:01:15.016399    4740 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1226 23:01:15.017039    4740 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1226 23:01:15.030692    4740 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1226 23:01:15.050813    4740 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I1226 23:01:15.050926    4740 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I1226 23:01:15.050926    4740 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I1226 23:01:16.097856    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1226 23:01:16.109862    4740 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1226 23:01:16.117063    4740 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1226 23:01:16.117410    4740 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1226 23:01:16.117410    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1226 23:01:19.124506    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1226 23:01:19.136839    4740 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1226 23:01:19.145882    4740 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1226 23:01:19.146828    4740 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1226 23:01:19.147050    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1226 23:01:23.375901    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:01:23.396822    4740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1226 23:01:23.410633    4740 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1226 23:01:23.415641    4740 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1226 23:01:23.415641    4740 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1226 23:01:23.415641    4740 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
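The kubectl/kubeadm/kubelet transfers above all follow the same existence-check-then-copy pattern: `stat` the binary on the guest, and fall back to copying it from the host cache only when the probe fails. A minimal sketch of that pattern; the paths and `cache/` prefix are illustrative, not minikube's actual layout, and the commands are only emitted here rather than run over SSH:

```shell
# Probe-then-copy pattern used for each Kubernetes binary (illustrative paths).
ensure_binary() {
  local name="$1" version="$2"
  local dest="/var/lib/minikube/binaries/${version}/${name}"
  # The real flow runs these on the guest via ssh_runner; here we just emit them.
  echo "stat -c \"%s %y\" ${dest} || scp cache/${name} ${dest}"
}
ensure_binary kubelet v1.28.4
```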
	I1226 23:01:24.180917    4740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1226 23:01:24.196807    4740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1226 23:01:24.222604    4740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 23:01:24.265314    4740 ssh_runner.go:195] Run: grep 172.21.184.4	control-plane.minikube.internal$ /etc/hosts
	I1226 23:01:24.272510    4740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.184.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
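The `/etc/hosts` rewrite above is idempotent: any stale line for `control-plane.minikube.internal` is stripped before the current mapping is appended, so re-running it never duplicates the entry. A sketch of that filter, reading the hosts file on stdin so it can be exercised without touching `/etc/hosts`:

```shell
# Drop any existing tab-separated entry for the host, then append the new one.
update_hosts() {
  local ip="$1" host="$2"
  grep -v "	${host}\$" || true   # grep exits 1 if every line was removed
  printf '%s\t%s\n' "$ip" "$host"
}
printf '127.0.0.1\tlocalhost\n' | update_hosts 172.21.184.4 control-plane.minikube.internal
```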
	I1226 23:01:24.291902    4740 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:01:24.292170    4740 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:01:24.292170    4740 start.go:304] JoinCluster: &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:01:24.292726    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1226 23:01:24.292726    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:01:26.455614    4740 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:01:26.455684    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:26.455803    4740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:01:29.046210    4740 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 23:01:29.046296    4740 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:01:29.046296    4740 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:01:29.282223    4740 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2osei4.qomuvt9g4gg3mz05 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 
	I1226 23:01:29.282474    4740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9897494s)
	I1226 23:01:29.282474    4740 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:01:29.282474    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2osei4.qomuvt9g4gg3mz05 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-455300-m02"
	I1226 23:01:29.349091    4740 command_runner.go:130] ! W1226 23:01:29.346715    1366 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1226 23:01:29.544386    4740 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 23:01:32.331678    4740 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 23:01:32.332630    4740 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1226 23:01:32.332718    4740 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1226 23:01:32.332718    4740 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 23:01:32.332718    4740 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 23:01:32.332718    4740 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 23:01:32.332718    4740 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1226 23:01:32.332718    4740 command_runner.go:130] > This node has joined the cluster:
	I1226 23:01:32.332718    4740 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1226 23:01:32.332718    4740 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1226 23:01:32.332821    4740 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1226 23:01:32.332876    4740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2osei4.qomuvt9g4gg3mz05 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-455300-m02": (3.0503467s)
	I1226 23:01:32.332876    4740 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1226 23:01:32.523044    4740 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1226 23:01:32.713024    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-455300 minikube.k8s.io/updated_at=2023_12_26T23_01_32_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 23:01:32.860419    4740 command_runner.go:130] > node/multinode-455300-m02 labeled
	I1226 23:01:32.860572    4740 start.go:306] JoinCluster complete in 8.5682494s
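The 8.5s JoinCluster sequence above is a two-step handshake: the control plane mints a join command (`kubeadm token create --print-join-command --ttl=0`), and the worker runs it with the extra flags minikube adds. A sketch of how that worker-side command is assembled; the token and hash arguments below are placeholders, not real credentials:

```shell
# Assemble the worker-side join command from the control plane's token output.
build_join_cmd() {
  local endpoint="$1" token="$2" ca_hash="$3"
  echo "kubeadm join ${endpoint} --token ${token} --discovery-token-ca-cert-hash sha256:${ca_hash} --ignore-preflight-errors=all"
}
build_join_cmd control-plane.minikube.internal:8443 abcdef.0123456789abcdef PLACEHOLDER_HASH
```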
	I1226 23:01:32.860659    4740 cni.go:84] Creating CNI manager for ""
	I1226 23:01:32.860659    4740 cni.go:136] 2 nodes found, recommending kindnet
	I1226 23:01:32.873292    4740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 23:01:32.884204    4740 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 23:01:32.884204    4740 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1226 23:01:32.884204    4740 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1226 23:01:32.884204    4740 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 23:01:32.884204    4740 command_runner.go:130] > Access: 2023-12-26 22:56:26.529435000 +0000
	I1226 23:01:32.884204    4740 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1226 23:01:32.884204    4740 command_runner.go:130] > Change: 2023-12-26 22:56:16.394000000 +0000
	I1226 23:01:32.884204    4740 command_runner.go:130] >  Birth: -
	I1226 23:01:32.884204    4740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 23:01:32.884204    4740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 23:01:32.922461    4740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 23:01:33.335678    4740 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:01:33.336187    4740 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:01:33.336187    4740 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 23:01:33.336187    4740 command_runner.go:130] > daemonset.apps/kindnet configured
	I1226 23:01:33.337356    4740 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:01:33.338177    4740 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.184.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:01:33.338991    4740 round_trippers.go:463] GET https://172.21.184.4:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 23:01:33.338991    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:33.338991    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:33.338991    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:33.353713    4740 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1226 23:01:33.354708    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:33.354708    4740 round_trippers.go:580]     Audit-Id: 6b489114-06d1-4dfe-a546-b5f84cc2f74d
	I1226 23:01:33.354761    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:33.354761    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:33.354761    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:33.354761    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:33.354820    4740 round_trippers.go:580]     Content-Length: 291
	I1226 23:01:33.354925    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:33 GMT
	I1226 23:01:33.355014    4740 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"456","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 23:01:33.355166    4740 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-455300" context rescaled to 1 replicas
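The rescale above does not patch the Deployment directly; it goes through the Deployment's `scale` subresource, which is why the GET returns a `Scale` object rather than the full Deployment. A sketch of the URL shape involved (the host is the one from this run, used only as an example):

```shell
# Scale-subresource URL for a Deployment, as used by the coredns rescale above.
scale_url() {
  local host="$1" ns="$2" deploy="$3"
  echo "${host}/apis/apps/v1/namespaces/${ns}/deployments/${deploy}/scale"
}
scale_url https://172.21.184.4:8443 kube-system coredns
```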
	I1226 23:01:33.355247    4740 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:01:33.363372    4740 out.go:177] * Verifying Kubernetes components...
	I1226 23:01:33.383592    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:01:33.411408    4740 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:01:33.412397    4740 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.184.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:01:33.413351    4740 node_ready.go:35] waiting up to 6m0s for node "multinode-455300-m02" to be "Ready" ...
	I1226 23:01:33.413598    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:33.413598    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:33.413598    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:33.413598    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:33.419040    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:01:33.419040    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:33.419040    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:33.419040    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:33.419040    4740 round_trippers.go:580]     Content-Length: 4036
	I1226 23:01:33.419040    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:33 GMT
	I1226 23:01:33.419040    4740 round_trippers.go:580]     Audit-Id: 0268dd08-b12b-4c0e-b801-e129870caf56
	I1226 23:01:33.419040    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:33.419040    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:33.420052    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"615","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3012 chars]
	I1226 23:01:33.925244    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:33.925244    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:33.925244    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:33.925244    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:33.930768    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:01:33.930768    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:33.930768    4740 round_trippers.go:580]     Audit-Id: ac372529-9085-4486-abf9-6e7b0b81c3c4
	I1226 23:01:33.930768    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:33.930768    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:33.930768    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:33.930768    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:33.930907    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:33 GMT
	I1226 23:01:33.931071    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:34.419948    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:34.420070    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:34.420070    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:34.420070    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:34.424412    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:34.424793    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:34.424793    4740 round_trippers.go:580]     Audit-Id: b3104efa-5f3c-4d84-9de6-ca09487e3181
	I1226 23:01:34.424793    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:34.424793    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:34.424793    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:34.424793    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:34.424793    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:34 GMT
	I1226 23:01:34.424975    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:34.928759    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:34.928759    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:34.928759    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:34.928759    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:34.932639    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:34.932639    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:34.932639    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:34.932639    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:34.932639    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:34.932639    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:34.932639    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:34 GMT
	I1226 23:01:34.932639    4740 round_trippers.go:580]     Audit-Id: 918eabc4-7615-4e8d-8000-6b851f6037c7
	I1226 23:01:34.933106    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:35.422295    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:35.422370    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:35.422370    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:35.422370    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:35.426752    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:35.426752    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:35.426752    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:35.426752    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:35.427256    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:35 GMT
	I1226 23:01:35.427256    4740 round_trippers.go:580]     Audit-Id: d7467cbb-9062-40b1-b934-7b513cff10a8
	I1226 23:01:35.427256    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:35.427256    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:35.427256    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:35.427962    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:35.925155    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:35.925217    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:35.925217    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:35.925217    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:35.929673    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:35.929673    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:35.929673    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:35.929673    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:35 GMT
	I1226 23:01:35.929673    4740 round_trippers.go:580]     Audit-Id: 1c371bd8-2988-4af6-94b1-1c8fd21f5029
	I1226 23:01:35.929673    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:35.929673    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:35.929673    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:35.930931    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:36.417769    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:36.417769    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:36.417769    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:36.417769    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:36.421755    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:36.421755    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:36.421755    4740 round_trippers.go:580]     Audit-Id: d212ab3a-678e-4cad-b765-00c30e8763c4
	I1226 23:01:36.421755    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:36.422674    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:36.422674    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:36.422674    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:36.422674    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:36 GMT
	I1226 23:01:36.423058    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:36.925804    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:36.925890    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:36.925890    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:36.925890    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:36.929718    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:36.929718    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:36.929718    4740 round_trippers.go:580]     Audit-Id: 45e8a871-635d-47f6-a7b6-c1feadb393ba
	I1226 23:01:36.929718    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:36.929942    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:36.929942    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:36.929942    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:36.929942    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:36 GMT
	I1226 23:01:36.930158    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:37.424810    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:37.424918    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:37.424918    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:37.425021    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:37.429290    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:37.429290    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:37.429290    4740 round_trippers.go:580]     Audit-Id: 663c4e5c-d816-4a4b-ba18-d2c8e6ee223a
	I1226 23:01:37.429720    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:37.429720    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:37.429720    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:37.429720    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:37.429805    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:37 GMT
	I1226 23:01:37.429805    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:37.430355    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:37.924814    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:37.924814    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:37.924814    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:37.924814    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:37.928383    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:37.929399    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:37.929399    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:37 GMT
	I1226 23:01:37.929399    4740 round_trippers.go:580]     Audit-Id: 3c12ef5e-d589-4411-a517-32947c1bf63c
	I1226 23:01:37.929450    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:37.929450    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:37.929450    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:37.929537    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:37.930564    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:38.429578    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:38.429636    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:38.429636    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:38.429636    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:38.433243    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:38.433243    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:38.433243    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:38 GMT
	I1226 23:01:38.433243    4740 round_trippers.go:580]     Audit-Id: fb136d89-6db3-414d-8a6a-db8b487c6ef6
	I1226 23:01:38.433243    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:38.433243    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:38.433243    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:38.433377    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:38.433793    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:38.924260    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:38.924323    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:38.924323    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:38.924323    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:38.929985    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:01:38.929985    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:38.929985    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:38.930140    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:38.930140    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:38 GMT
	I1226 23:01:38.930140    4740 round_trippers.go:580]     Audit-Id: 4fe5bc40-2a4a-48e0-9a91-f739fcc9db9d
	I1226 23:01:38.930140    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:38.930140    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:38.930215    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:39.419033    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:39.419033    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:39.419097    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:39.419097    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:39.423556    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:39.423707    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:39.423707    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:39.423707    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:39 GMT
	I1226 23:01:39.423707    4740 round_trippers.go:580]     Audit-Id: 91f03f84-9cf3-4a6a-b373-b297fcbfa697
	I1226 23:01:39.423707    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:39.423786    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:39.423786    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:39.423912    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:39.924236    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:39.924323    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:39.924323    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:39.924323    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:39.930211    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:01:39.930211    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:39.930211    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:39.930211    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:39.930211    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:39 GMT
	I1226 23:01:39.930211    4740 round_trippers.go:580]     Audit-Id: 21401135-58a7-45fa-8267-bcf2b17e52df
	I1226 23:01:39.930211    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:39.930211    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:39.931536    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:39.931711    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:40.421680    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:40.421810    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:40.421810    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:40.421810    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:40.426154    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:40.426154    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:40.426154    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:40.426154    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:40 GMT
	I1226 23:01:40.426154    4740 round_trippers.go:580]     Audit-Id: 54f7e5d4-bd3f-4598-8b4a-6d75ad751402
	I1226 23:01:40.426154    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:40.426154    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:40.426154    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:40.426154    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:40.927966    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:40.927966    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:40.927966    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:40.927966    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:40.931565    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:40.931565    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:40.931565    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:40.931565    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:40.931565    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:40 GMT
	I1226 23:01:40.931565    4740 round_trippers.go:580]     Audit-Id: bf71c868-29d1-4286-a293-2a3b9e1ac60a
	I1226 23:01:40.931565    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:40.931565    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:40.932550    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:41.420728    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:41.420728    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:41.420728    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:41.420728    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:41.424309    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:41.424309    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:41.425318    4740 round_trippers.go:580]     Audit-Id: a93eed48-9954-4ea7-8db4-2d071366a1d7
	I1226 23:01:41.425364    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:41.425364    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:41.425364    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:41.425402    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:41.425402    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:41 GMT
	I1226 23:01:41.425782    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:41.926909    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:41.927227    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:41.927227    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:41.927227    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:42.020227    4740 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I1226 23:01:42.021175    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:42.021175    4740 round_trippers.go:580]     Audit-Id: 42f261b7-8f00-4f12-b16c-d5e577d79e60
	I1226 23:01:42.021175    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:42.021218    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:42.021218    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:42.021218    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:42.021218    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:42 GMT
	I1226 23:01:42.021477    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"618","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3121 chars]
	I1226 23:01:42.021477    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:42.427417    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:42.427417    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:42.427657    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:42.427657    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:42.619461    4740 round_trippers.go:574] Response Status: 200 OK in 191 milliseconds
	I1226 23:01:42.620140    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:42.620140    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:42.620140    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:42.620140    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:42 GMT
	I1226 23:01:42.620140    4740 round_trippers.go:580]     Audit-Id: 904ed9ee-89f2-4a8f-87ef-69412c4afffc
	I1226 23:01:42.620140    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:42.620140    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:42.620345    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:42.928788    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:42.928856    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:42.928856    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:42.928856    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:42.932204    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:42.932713    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:42.932713    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:42.932713    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:42.932713    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:42.932713    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:42.932713    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:42 GMT
	I1226 23:01:42.932802    4740 round_trippers.go:580]     Audit-Id: 9e1f06f3-17e7-4729-a935-19bc349156ae
	I1226 23:01:42.932995    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:43.421770    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:43.421770    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:43.421865    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:43.421865    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:43.426265    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:43.426265    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:43.426265    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:43.426265    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:43 GMT
	I1226 23:01:43.426265    4740 round_trippers.go:580]     Audit-Id: b88f8026-e543-4024-9e32-1987f080002c
	I1226 23:01:43.426265    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:43.426265    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:43.426265    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:43.426265    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:43.928085    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:43.928085    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:43.928085    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:43.928085    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:43.933066    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:43.933783    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:43.933783    4740 round_trippers.go:580]     Audit-Id: d0512b1f-d0cc-4945-9420-c33801ede901
	I1226 23:01:43.933783    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:43.933783    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:43.933783    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:43.933783    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:43.933783    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:43 GMT
	I1226 23:01:43.933783    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:44.420771    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:44.420771    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:44.420771    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:44.420771    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:44.426028    4740 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:01:44.426028    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:44.426318    4740 round_trippers.go:580]     Audit-Id: 5deb926a-7d72-471a-9cec-ab3524205107
	I1226 23:01:44.426318    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:44.426318    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:44.426318    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:44.426318    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:44.426318    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:44 GMT
	I1226 23:01:44.426758    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:44.427132    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:44.927628    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:44.927628    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:44.927628    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:44.927780    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:44.937007    4740 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 23:01:44.937007    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:44.937007    4740 round_trippers.go:580]     Audit-Id: f528d875-d143-4f2f-a244-8e9d390f2bcf
	I1226 23:01:44.937007    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:44.937007    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:44.937007    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:44.937007    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:44.937007    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:44 GMT
	I1226 23:01:44.937007    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:45.418999    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:45.419060    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:45.419060    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:45.419060    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:45.423438    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:45.423438    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:45.423438    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:45.423438    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:45.423438    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:45.423438    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:45 GMT
	I1226 23:01:45.423438    4740 round_trippers.go:580]     Audit-Id: 013306d0-451c-4e0b-845c-13ac48a138bb
	I1226 23:01:45.423438    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:45.423438    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:45.928133    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:45.928204    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:45.928204    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:45.928268    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:45.932006    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:45.932006    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:45.932006    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:45.932006    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:45.932006    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:45.932006    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:45 GMT
	I1226 23:01:45.933001    4740 round_trippers.go:580]     Audit-Id: 22fdf8ff-77d7-4035-864d-304616075e0d
	I1226 23:01:45.933001    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:45.933184    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:46.420039    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:46.420096    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:46.420096    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:46.420096    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:46.424411    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:46.424411    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:46.424411    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:46.424411    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:46 GMT
	I1226 23:01:46.424411    4740 round_trippers.go:580]     Audit-Id: 9c2ea983-7245-4b73-8ada-f9d1e1125cf0
	I1226 23:01:46.424411    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:46.424411    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:46.424411    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:46.425206    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:46.929518    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:46.929578    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:46.929578    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:46.929578    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:46.933987    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:46.933987    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:46.933987    4740 round_trippers.go:580]     Audit-Id: 8cb38d43-8a1f-4cfd-bb4a-d7e1c5e023ac
	I1226 23:01:46.933987    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:46.933987    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:46.933987    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:46.934474    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:46.934474    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:46 GMT
	I1226 23:01:46.934643    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:46.935178    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:47.418303    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:47.418303    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:47.418303    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:47.418303    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:47.617821    4740 round_trippers.go:574] Response Status: 200 OK in 199 milliseconds
	I1226 23:01:47.618419    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:47.618419    4740 round_trippers.go:580]     Audit-Id: 8cab8f6f-99af-4eda-9cf2-c0add27e0651
	I1226 23:01:47.618419    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:47.618419    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:47.618419    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:47.618419    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:47.618419    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:47 GMT
	I1226 23:01:47.618500    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:47.923432    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:47.923432    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:47.923540    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:47.923540    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:47.926723    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:47.926723    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:47.926723    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:47.926723    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:47 GMT
	I1226 23:01:47.926723    4740 round_trippers.go:580]     Audit-Id: 1cf0aa5d-5191-4896-b808-b457f396c6bd
	I1226 23:01:47.926723    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:47.927722    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:47.927722    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:47.927937    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:48.415616    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:48.415616    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:48.415616    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:48.415706    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:48.420464    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:48.420464    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:48.420830    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:48.420830    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:48.420830    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:48 GMT
	I1226 23:01:48.420830    4740 round_trippers.go:580]     Audit-Id: ecf99eae-f31f-4263-9729-61f48c2dcad8
	I1226 23:01:48.420905    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:48.420905    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:48.420905    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:48.928031    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:48.928075    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:48.928075    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:48.928218    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:48.931085    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:01:48.931287    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:48.931287    4740 round_trippers.go:580]     Audit-Id: 844c0169-d281-4807-a5f1-985280ff1ac1
	I1226 23:01:48.931287    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:48.931287    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:48.931287    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:48.931287    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:48.931393    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:48 GMT
	I1226 23:01:48.931586    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:49.426433    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:49.426522    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:49.426522    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:49.426522    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:49.430464    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:49.430689    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:49.430689    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:49 GMT
	I1226 23:01:49.430689    4740 round_trippers.go:580]     Audit-Id: 4977d131-ad83-4a84-b550-e864561b27c4
	I1226 23:01:49.430689    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:49.430689    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:49.430689    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:49.430689    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:49.430971    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:49.431410    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:49.928258    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:49.928258    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:49.928258    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:49.928258    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:49.931844    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:49.931844    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:49.932309    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:49.932309    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:49.932400    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:49.932400    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:49.932400    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:49 GMT
	I1226 23:01:49.932502    4740 round_trippers.go:580]     Audit-Id: 39bc7841-0895-4f21-bebd-238bdacf1fba
	I1226 23:01:49.932535    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:50.427203    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:50.427339    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:50.427339    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:50.427339    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:50.430735    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:50.431658    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:50.431658    4740 round_trippers.go:580]     Audit-Id: 285cf4e6-4e04-4798-a399-0d38b2f9ad9a
	I1226 23:01:50.431658    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:50.431658    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:50.431658    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:50.431658    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:50.431750    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:50 GMT
	I1226 23:01:50.432026    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:50.915193    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:50.915287    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:50.915355    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:50.915355    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:50.919092    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:50.919092    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:50.919092    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:50.919092    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:50.920150    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:50.920150    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:50.920150    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:50 GMT
	I1226 23:01:50.920150    4740 round_trippers.go:580]     Audit-Id: 0d851b64-355c-412c-b015-2832606926f1
	I1226 23:01:50.920150    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:51.417245    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:51.417245    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:51.417245    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:51.417354    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:51.421699    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:51.421699    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:51.421699    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:51.421699    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:51.421699    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:51.422113    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:51.422113    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:51 GMT
	I1226 23:01:51.422113    4740 round_trippers.go:580]     Audit-Id: 6299eec8-1de8-4fa2-a42b-e547469bb4bb
	I1226 23:01:51.422679    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:51.918690    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:51.918690    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:51.918895    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:51.918895    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:51.923340    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:51.923563    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:51.923563    4740 round_trippers.go:580]     Audit-Id: 30e96093-1904-41d3-80ed-e68e8d20858a
	I1226 23:01:51.923563    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:51.923563    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:51.923699    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:51.923699    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:51.923699    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:51 GMT
	I1226 23:01:51.923957    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:51.924615    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:52.417122    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:52.417122    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:52.417122    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:52.417122    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:52.422183    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:52.422183    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:52.422183    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:52 GMT
	I1226 23:01:52.422183    4740 round_trippers.go:580]     Audit-Id: 8596463d-2999-4522-923f-768164d88c72
	I1226 23:01:52.422183    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:52.422183    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:52.422183    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:52.422284    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:52.422383    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:52.921118    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:52.921231    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:52.921231    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:52.921231    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:52.924922    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:52.925556    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:52.925556    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:52.925556    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:52.925556    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:52 GMT
	I1226 23:01:52.925556    4740 round_trippers.go:580]     Audit-Id: 159e83c1-d482-472f-9aed-a750bd97f13d
	I1226 23:01:52.925556    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:52.925556    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:52.925909    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:53.421827    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:53.421827    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:53.421827    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:53.421827    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:53.425502    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:53.426079    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:53.426079    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:53.426079    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:53 GMT
	I1226 23:01:53.426079    4740 round_trippers.go:580]     Audit-Id: ca52bbbb-b4a4-4702-a988-596985a52fb4
	I1226 23:01:53.426183    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:53.426183    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:53.426183    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:53.426464    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:53.923880    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:53.923880    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:53.923880    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:53.923880    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:53.927448    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:53.927448    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:53.927448    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:53.927448    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:53.927448    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:53 GMT
	I1226 23:01:53.927448    4740 round_trippers.go:580]     Audit-Id: 81cb4cbf-80c7-4124-98e4-69cac3a1aa5e
	I1226 23:01:53.927448    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:53.927448    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:53.927448    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"630","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3390 chars]
	I1226 23:01:53.928456    4740 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:01:54.425210    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:54.425636    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.425636    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.425636    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.430087    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:54.430186    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.430186    4740 round_trippers.go:580]     Audit-Id: 2b6861d2-1db4-4725-bbc2-beab2308bc1f
	I1226 23:01:54.430256    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.430321    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.430321    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.430321    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.430321    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.430421    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"653","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3256 chars]
	I1226 23:01:54.430924    4740 node_ready.go:49] node "multinode-455300-m02" has status "Ready":"True"
	I1226 23:01:54.431033    4740 node_ready.go:38] duration metric: took 21.0175601s waiting for node "multinode-455300-m02" to be "Ready" ...
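The loop above — one GET against `/api/v1/nodes/multinode-455300-m02` roughly every 500 ms until `node_ready.go` stops logging `has status "Ready":"False"` — boils down to inspecting the node's `status.conditions`. The sketch below is a minimal, hypothetical Python illustration of that check, not minikube's actual implementation; the `conditions` field it reads is part of the standard Kubernetes Node object but falls inside the truncated portion of the response bodies logged above.

```python
def is_node_ready(node: dict) -> bool:
    # A node counts as Ready when status.conditions contains a
    # condition of type "Ready" whose status is the string "True".
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Trimmed Node objects shaped like the (truncated) response bodies above.
not_ready = {"kind": "Node",
             "metadata": {"name": "multinode-455300-m02"},
             "status": {"conditions": [{"type": "Ready", "status": "False"}]}}
ready = {"kind": "Node",
         "metadata": {"name": "multinode-455300-m02"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}}

print(is_node_ready(not_ready))  # False
print(is_node_ready(ready))      # True
```

Once the check returns true, polling stops and the total wait (21.0175601s here) is reported as a duration metric.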
	I1226 23:01:54.431033    4740 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:01:54.431207    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods
	I1226 23:01:54.431270    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.431270    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.431270    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.439546    4740 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 23:01:54.439692    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.439692    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.439753    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.439753    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.439753    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.439820    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.439820    4740 round_trippers.go:580]     Audit-Id: 54af3c36-765f-4f9b-b891-7b30b710f551
	I1226 23:01:54.441555    4740 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"653"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"451","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67434 chars]
	I1226 23:01:54.445867    4740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.446099    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:01:54.446099    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.446170    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.446170    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.448914    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:01:54.448914    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.448914    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.449754    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.449754    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.449875    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.449875    4740 round_trippers.go:580]     Audit-Id: dd81b3c7-ab39-4b19-925a-65b5a031fbee
	I1226 23:01:54.449924    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.450108    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"451","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I1226 23:01:54.450677    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:54.450878    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.450878    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.450878    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.455697    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:54.455789    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.455789    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.455789    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.455789    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.455789    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.455789    4740 round_trippers.go:580]     Audit-Id: 0fee39b4-cd56-4b67-8516-49bbe839c82f
	I1226 23:01:54.455878    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.456080    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I1226 23:01:54.456394    4740 pod_ready.go:92] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:54.456394    4740 pod_ready.go:81] duration metric: took 10.5269ms waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.456394    4740 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.456394    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:01:54.456394    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.456394    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.456394    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.460382    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:54.460382    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.460382    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.460382    4740 round_trippers.go:580]     Audit-Id: 76b882ca-f5f5-4665-a181-526050dd97e8
	I1226 23:01:54.460382    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.460382    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.460382    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.460382    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.460382    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"74a3baac-66f8-4934-bdb2-a8a34de26d03","resourceVersion":"412","creationTimestamp":"2023-12-26T22:58:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.184.4:2379","kubernetes.io/config.hash":"c67437441a51739d7438424fd3960b56","kubernetes.io/config.mirror":"c67437441a51739d7438424fd3960b56","kubernetes.io/config.seen":"2023-12-26T22:58:06.456133965Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1226 23:01:54.460382    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:54.460382    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.460382    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.461281    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.464526    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:54.464910    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.464989    4740 round_trippers.go:580]     Audit-Id: e144f207-9743-4832-9834-4a571111ab0b
	I1226 23:01:54.464989    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.465086    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.465086    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.465086    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.465086    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.465438    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I1226 23:01:54.465546    4740 pod_ready.go:92] pod "etcd-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:54.465546    4740 pod_ready.go:81] duration metric: took 9.1516ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.465546    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.465546    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:01:54.465546    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.465546    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.465546    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.468457    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:01:54.468457    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.468457    4740 round_trippers.go:580]     Audit-Id: 67f3def8-f6b3-48a1-a153-4053a2cc5d22
	I1226 23:01:54.468457    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.469449    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.469449    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.469517    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.469517    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.469982    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"001f1489-e4c6-4a35-9c04-992ddd0eea29","resourceVersion":"413","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.184.4:8443","kubernetes.io/config.hash":"f2597de8fcd5ba36e5afbfdfbed4b155","kubernetes.io/config.mirror":"f2597de8fcd5ba36e5afbfdfbed4b155","kubernetes.io/config.seen":"2023-12-26T22:58:16.785839510Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I1226 23:01:54.470635    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:54.470722    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.470722    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.470722    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.475378    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:54.475378    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.475378    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.475378    4740 round_trippers.go:580]     Audit-Id: 85e83b37-abe4-48ab-850f-243981715eb3
	I1226 23:01:54.475378    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.475378    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.475378    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.475378    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.475378    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I1226 23:01:54.476110    4740 pod_ready.go:92] pod "kube-apiserver-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:54.476110    4740 pod_ready.go:81] duration metric: took 10.5642ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.476110    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.476110    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:01:54.476110    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.476110    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.476110    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.478740    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:01:54.479735    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.479735    4740 round_trippers.go:580]     Audit-Id: b746d44e-a510-4f12-a7fa-c9477d6dcbce
	I1226 23:01:54.479735    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.479735    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.479735    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.479735    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.479735    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.479735    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"410","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I1226 23:01:54.479735    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:54.479735    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.479735    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.479735    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.482689    4740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:01:54.483686    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.483686    4740 round_trippers.go:580]     Audit-Id: 52c9be93-20b7-4ede-b6dd-3dcb2abc0dcd
	I1226 23:01:54.483750    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.483804    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.483804    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.483804    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.483804    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.483804    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I1226 23:01:54.484684    4740 pod_ready.go:92] pod "kube-controller-manager-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:54.484684    4740 pod_ready.go:81] duration metric: took 8.5741ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.484684    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.627633    4740 request.go:629] Waited for 142.9491ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:01:54.628004    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:01:54.628004    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.628004    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.628051    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.632042    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:54.632445    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.632445    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.632445    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.632445    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.632445    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.632445    4740 round_trippers.go:580]     Audit-Id: b6c102b5-27d5-4225-b214-42fa32342f08
	I1226 23:01:54.632445    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.632732    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"635","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1226 23:01:54.830676    4740 request.go:629] Waited for 196.8859ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:54.830885    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:01:54.830885    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:54.830885    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:54.830999    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:54.835910    4740 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:01:54.835910    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:54.835910    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:54.836091    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:54 GMT
	I1226 23:01:54.836091    4740 round_trippers.go:580]     Audit-Id: 8040fc5e-45d7-44ac-8d0b-0f3b9ecd6c05
	I1226 23:01:54.836091    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:54.836091    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:54.836091    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:54.836387    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"653","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_01_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3256 chars]
	I1226 23:01:54.836943    4740 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:54.836943    4740 pod_ready.go:81] duration metric: took 352.2593ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:54.836943    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:55.034365    4740 request.go:629] Waited for 196.9851ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:01:55.034607    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:01:55.034607    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:55.034837    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:55.034909    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:55.039266    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:55.039266    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:55.039366    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:55 GMT
	I1226 23:01:55.039366    4740 round_trippers.go:580]     Audit-Id: 8acc337e-ad5d-4099-81fb-110901bb3e75
	I1226 23:01:55.039366    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:55.039366    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:55.039366    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:55.039432    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:55.040049    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"408","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I1226 23:01:55.236911    4740 request.go:629] Waited for 195.3812ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:55.237215    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:55.237215    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:55.237215    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:55.237215    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:55.240999    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:55.241624    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:55.241624    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:55.241624    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:55.241624    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:55.241624    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:55.241624    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:55 GMT
	I1226 23:01:55.241735    4740 round_trippers.go:580]     Audit-Id: 12a3adad-adca-4f80-a29e-f0a1ebf8308e
	I1226 23:01:55.241917    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I1226 23:01:55.242575    4740 pod_ready.go:92] pod "kube-proxy-hzcqb" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:55.242778    4740 pod_ready.go:81] duration metric: took 405.8347ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:55.242778    4740 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:55.440826    4740 request.go:629] Waited for 197.6555ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:01:55.441002    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:01:55.441002    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:55.441002    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:55.441002    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:55.447705    4740 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:01:55.447705    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:55.447705    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:55.447705    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:55.447705    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:55 GMT
	I1226 23:01:55.447800    4740 round_trippers.go:580]     Audit-Id: b027c10c-074b-46b9-be90-f4a60b0bf5ca
	I1226 23:01:55.447800    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:55.447800    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:55.447964    4740 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"411","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1226 23:01:55.628385    4740 request.go:629] Waited for 179.4565ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:55.628512    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes/multinode-455300
	I1226 23:01:55.628710    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:55.628710    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:55.628748    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:55.635686    4740 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:01:55.635686    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:55.635844    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:55.635844    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:55 GMT
	I1226 23:01:55.635844    4740 round_trippers.go:580]     Audit-Id: b094eba6-8493-4ded-adaf-88acc1352d54
	I1226 23:01:55.635844    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:55.635844    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:55.635895    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:55.635895    4740 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I1226 23:01:55.636575    4740 pod_ready.go:92] pod "kube-scheduler-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:01:55.636575    4740 pod_ready.go:81] duration metric: took 393.7971ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:01:55.636575    4740 pod_ready.go:38] duration metric: took 1.2055416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:01:55.636575    4740 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 23:01:55.652174    4740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:01:55.674575    4740 system_svc.go:56] duration metric: took 38ms WaitForService to wait for kubelet.
	I1226 23:01:55.674575    4740 kubeadm.go:581] duration metric: took 22.3192806s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 23:01:55.674752    4740 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:01:55.831843    4740 request.go:629] Waited for 156.8834ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.184.4:8443/api/v1/nodes
	I1226 23:01:55.831999    4740 round_trippers.go:463] GET https://172.21.184.4:8443/api/v1/nodes
	I1226 23:01:55.831999    4740 round_trippers.go:469] Request Headers:
	I1226 23:01:55.831999    4740 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:01:55.831999    4740 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:01:55.836814    4740 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:01:55.836920    4740 round_trippers.go:577] Response Headers:
	I1226 23:01:55.837146    4740 round_trippers.go:580]     Audit-Id: 8af3d88f-b3b6-402f-8760-47efcda1b052
	I1226 23:01:55.837174    4740 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:01:55.837174    4740 round_trippers.go:580]     Content-Type: application/json
	I1226 23:01:55.837174    4740 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:01:55.837174    4740 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:01:55.837174    4740 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:01:55 GMT
	I1226 23:01:55.837174    4740 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"654"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"459","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9258 chars]
	I1226 23:01:55.838665    4740 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:01:55.838828    4740 node_conditions.go:123] node cpu capacity is 2
	I1226 23:01:55.838828    4740 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:01:55.838900    4740 node_conditions.go:123] node cpu capacity is 2
	I1226 23:01:55.838900    4740 node_conditions.go:105] duration metric: took 164.1472ms to run NodePressure ...
	I1226 23:01:55.838900    4740 start.go:228] waiting for startup goroutines ...
	I1226 23:01:55.838900    4740 start.go:242] writing updated cluster config ...
	I1226 23:01:55.855287    4740 ssh_runner.go:195] Run: rm -f paused
	I1226 23:01:56.032885    4740 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 23:01:56.036994    4740 out.go:177] * Done! kubectl is now configured to use "multinode-455300" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2023-12-26 22:56:19 UTC, ends at Tue 2023-12-26 23:03:13 UTC. --
	Dec 26 22:58:44 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:44.615122590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 22:58:44 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:44.618058883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 22:58:44 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:44.618240583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 22:58:44 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:44.618372382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 22:58:44 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:44.618922381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 22:58:45 multinode-455300 cri-dockerd[1215]: time="2023-12-26T22:58:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94c58afb0b3a4a5fd58b878e5b43e8ce0f3fb4edd8737c20a859ff77239b181d/resolv.conf as [nameserver 172.21.176.1]"
	Dec 26 22:58:45 multinode-455300 cri-dockerd[1215]: time="2023-12-26T22:58:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58a2f8149f7fdbaa003ad9b9551e6a0f5676cf97ccb1b00d081ecfc9662b31db/resolv.conf as [nameserver 172.21.176.1]"
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.465806452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.465865252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.465895152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.465907652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.576790019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.576986319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.577008319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 22:58:45 multinode-455300 dockerd[1330]: time="2023-12-26T22:58:45.577019419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:02:21 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:21.604360176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 23:02:21 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:21.604466980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:02:21 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:21.604489380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:02:21 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:21.604692387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:02:22 multinode-455300 cri-dockerd[1215]: time="2023-12-26T23:02:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/781e00b7789fd49c1b06f8e384b65e5098b862c639be7a908a5c72ce7140cf19/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 26 23:02:23 multinode-455300 cri-dockerd[1215]: time="2023-12-26T23:02:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Dec 26 23:02:23 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:23.602822431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 23:02:23 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:23.604080768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:02:23 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:23.604124869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:02:23 multinode-455300 dockerd[1330]: time="2023-12-26T23:02:23.604944193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26363c81c8c2e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   781e00b7789fd       busybox-5bc68d56bd-flvvn
	5944000e150d4       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   58a2f8149f7fd       coredns-5dd5756b68-fj9bd
	c49ce5a609883       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   94c58afb0b3a4       storage-provisioner
	5e6fbedb8b41b       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Running             kindnet-cni               0                   6374d63f48806       kindnet-zxd45
	de1e7a6bed714       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   e74bc4380f45a       kube-proxy-hzcqb
	2c33bdd1003a5       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   d6f5bd631857d       etcd-multinode-455300
	239b6c40fa398       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            0                   dd32942a97204       kube-scheduler-multinode-455300
	9a1fd87d0726d       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   0                   2303b2b6305d3       kube-controller-manager-multinode-455300
	0d2ca397ea4bd       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   f18330f939ceb       kube-apiserver-multinode-455300
	
	
	==> coredns [5944000e150d] <==
	[INFO] 10.244.1.2:59670 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102203s
	[INFO] 10.244.0.3:47279 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199705s
	[INFO] 10.244.0.3:46329 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275807s
	[INFO] 10.244.0.3:51952 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257006s
	[INFO] 10.244.0.3:39632 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116603s
	[INFO] 10.244.0.3:39823 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069001s
	[INFO] 10.244.0.3:40379 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093902s
	[INFO] 10.244.0.3:36378 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086502s
	[INFO] 10.244.0.3:37142 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179905s
	[INFO] 10.244.1.2:38866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125403s
	[INFO] 10.244.1.2:55914 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059302s
	[INFO] 10.244.1.2:34419 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086902s
	[INFO] 10.244.1.2:44856 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047402s
	[INFO] 10.244.0.3:33876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000471212s
	[INFO] 10.244.0.3:46526 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078902s
	[INFO] 10.244.0.3:55356 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178604s
	[INFO] 10.244.0.3:54826 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001129029s
	[INFO] 10.244.1.2:53436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271206s
	[INFO] 10.244.1.2:44799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000369109s
	[INFO] 10.244.1.2:35728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111303s
	[INFO] 10.244.1.2:56657 - 5 "PTR IN 1.176.21.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150804s
	[INFO] 10.244.0.3:58149 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261405s
	[INFO] 10.244.0.3:52594 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000382108s
	[INFO] 10.244.0.3:44384 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090701s
	[INFO] 10.244.0.3:46996 - 5 "PTR IN 1.176.21.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085502s
	
	
	==> describe nodes <==
	Name:               multinode-455300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-455300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-455300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T22_58_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:58:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-455300
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 23:03:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 23:02:52 +0000   Tue, 26 Dec 2023 22:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 23:02:52 +0000   Tue, 26 Dec 2023 22:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 23:02:52 +0000   Tue, 26 Dec 2023 22:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 23:02:52 +0000   Tue, 26 Dec 2023 22:58:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.184.4
	  Hostname:    multinode-455300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8a2e0911b574648a74b5f1cee182c8f
	  System UUID:                cabade69-24af-5b4b-90ee-9a5f4e38ee27
	  Boot ID:                    eb482b18-a118-4915-a33b-49e2d19d091f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-flvvn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-5dd5756b68-fj9bd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m44s
	  kube-system                 etcd-multinode-455300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-zxd45                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m44s
	  kube-system                 kube-apiserver-multinode-455300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-multinode-455300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-hzcqb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-multinode-455300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node multinode-455300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node multinode-455300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node multinode-455300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s                kubelet          Node multinode-455300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s                kubelet          Node multinode-455300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s                kubelet          Node multinode-455300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m45s                node-controller  Node multinode-455300 event: Registered Node multinode-455300 in Controller
	  Normal  NodeReady                4m30s                kubelet          Node multinode-455300 status is now: NodeReady
	
	
	Name:               multinode-455300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-455300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-455300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_26T23_01_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 23:01:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-455300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 23:03:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 23:02:33 +0000   Tue, 26 Dec 2023 23:01:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 23:02:33 +0000   Tue, 26 Dec 2023 23:01:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 23:02:33 +0000   Tue, 26 Dec 2023 23:01:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 23:02:33 +0000   Tue, 26 Dec 2023 23:01:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.187.58
	  Hostname:    multinode-455300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bb1766e6ab3438a80fcdf60a63abd52
	  System UUID:                995771b5-3446-ed4e-9347-b1c6a8c42028
	  Boot ID:                    693d5539-f290-4cd5-a038-c00a11602cf2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-bskhd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-zt55b               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-proxy-bqlf8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x5 over 103s)  kubelet          Node multinode-455300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x5 over 103s)  kubelet          Node multinode-455300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x5 over 103s)  kubelet          Node multinode-455300-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-455300-m02 event: Registered Node multinode-455300-m02 in Controller
	  Normal  NodeReady                79s                  kubelet          Node multinode-455300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.326219] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.064259] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.193458] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.058482] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec26 22:57] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.152354] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[ +30.290921] systemd-fstab-generator[938]: Ignoring "noauto" for root device
	[  +0.586683] systemd-fstab-generator[978]: Ignoring "noauto" for root device
	[  +0.172803] systemd-fstab-generator[989]: Ignoring "noauto" for root device
	[  +0.196659] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +1.355708] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.417193] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.173607] systemd-fstab-generator[1171]: Ignoring "noauto" for root device
	[  +0.171643] systemd-fstab-generator[1182]: Ignoring "noauto" for root device
	[  +0.165607] systemd-fstab-generator[1193]: Ignoring "noauto" for root device
	[  +0.200592] systemd-fstab-generator[1207]: Ignoring "noauto" for root device
	[ +12.718927] systemd-fstab-generator[1315]: Ignoring "noauto" for root device
	[  +2.575849] kauditd_printk_skb: 29 callbacks suppressed
	[Dec26 22:58] systemd-fstab-generator[1694]: Ignoring "noauto" for root device
	[  +0.584944] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.832395] systemd-fstab-generator[2687]: Ignoring "noauto" for root device
	[ +26.141654] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2c33bdd1003a] <==
	{"level":"info","ts":"2023-12-26T22:58:10.012303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-26T22:58:10.010824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:58:10.016182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.21.184.4:2379"}
	{"level":"info","ts":"2023-12-26T22:58:10.016645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-26T22:58:10.016717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-26T22:58:10.02086Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"af5a9070d9ef2513","local-member-id":"897479b6d7267bc0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:58:10.027693Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:58:10.027883Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:58:38.009319Z","caller":"traceutil/trace.go:171","msg":"trace[2029258987] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"145.339191ms","start":"2023-12-26T22:58:37.863964Z","end":"2023-12-26T22:58:38.009303Z","steps":["trace[2029258987] 'process raft request'  (duration: 135.773123ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T22:58:41.350162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.339772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-455300\" ","response":"range_response_count:1 size:4485"}
	{"level":"warn","ts":"2023-12-26T22:58:41.350239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.268976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T22:58:41.350255Z","caller":"traceutil/trace.go:171","msg":"trace[513295001] range","detail":"{range_begin:/registry/minions/multinode-455300; range_end:; response_count:1; response_revision:424; }","duration":"194.486572ms","start":"2023-12-26T22:58:41.155754Z","end":"2023-12-26T22:58:41.35024Z","steps":["trace[513295001] 'range keys from in-memory index tree'  (duration: 194.225873ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T22:58:41.350277Z","caller":"traceutil/trace.go:171","msg":"trace[1571719067] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:424; }","duration":"193.311476ms","start":"2023-12-26T22:58:41.156956Z","end":"2023-12-26T22:58:41.350267Z","steps":["trace[1571719067] 'range keys from in-memory index tree'  (duration: 193.090876ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T22:58:57.952994Z","caller":"traceutil/trace.go:171","msg":"trace[2038091053] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"135.857069ms","start":"2023-12-26T22:58:57.817116Z","end":"2023-12-26T22:58:57.952973Z","steps":["trace[2038091053] 'process raft request'  (duration: 135.524769ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T23:01:25.259487Z","caller":"traceutil/trace.go:171","msg":"trace[1896540201] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"111.538832ms","start":"2023-12-26T23:01:25.147918Z","end":"2023-12-26T23:01:25.259457Z","steps":["trace[1896540201] 'process raft request'  (duration: 105.34163ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T23:01:42.46088Z","caller":"traceutil/trace.go:171","msg":"trace[2145073701] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"187.290771ms","start":"2023-12-26T23:01:42.273571Z","end":"2023-12-26T23:01:42.460862Z","steps":["trace[2145073701] 'process raft request'  (duration: 187.021755ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T23:01:42.605388Z","caller":"traceutil/trace.go:171","msg":"trace[714241624] linearizableReadLoop","detail":"{readStateIndex:683; appliedIndex:681; }","duration":"184.872624ms","start":"2023-12-26T23:01:42.420496Z","end":"2023-12-26T23:01:42.605368Z","steps":["trace[714241624] 'read index received'  (duration: 40.18644ms)","trace[714241624] 'applied index is now lower than readState.Index'  (duration: 144.685484ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T23:01:42.605909Z","caller":"traceutil/trace.go:171","msg":"trace[1904853191] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"245.439501ms","start":"2023-12-26T23:01:42.360456Z","end":"2023-12-26T23:01:42.605895Z","steps":["trace[1904853191] 'process raft request'  (duration: 240.265387ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T23:01:42.607019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.14984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-455300-m02\" ","response":"range_response_count:1 size:3141"}
	{"level":"info","ts":"2023-12-26T23:01:42.607084Z","caller":"traceutil/trace.go:171","msg":"trace[1378497642] range","detail":"{range_begin:/registry/minions/multinode-455300-m02; range_end:; response_count:1; response_revision:630; }","duration":"186.601928ms","start":"2023-12-26T23:01:42.420469Z","end":"2023-12-26T23:01:42.607071Z","steps":["trace[1378497642] 'agreement among raft nodes before linearized reading'  (duration: 185.004932ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T23:01:47.606025Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.571873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-455300-m02\" ","response":"range_response_count:1 size:3141"}
	{"level":"info","ts":"2023-12-26T23:01:47.606904Z","caller":"traceutil/trace.go:171","msg":"trace[2031689546] range","detail":"{range_begin:/registry/minions/multinode-455300-m02; range_end:; response_count:1; response_revision:640; }","duration":"192.458222ms","start":"2023-12-26T23:01:47.414428Z","end":"2023-12-26T23:01:47.606886Z","steps":["trace[2031689546] 'range keys from in-memory index tree'  (duration: 191.404063ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T23:01:47.606041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.764102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T23:01:47.607465Z","caller":"traceutil/trace.go:171","msg":"trace[2104144429] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:640; }","duration":"240.191582ms","start":"2023-12-26T23:01:47.367265Z","end":"2023-12-26T23:01:47.607456Z","steps":["trace[2104144429] 'range keys from in-memory index tree'  (duration: 238.594893ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-26T23:01:47.802448Z","caller":"traceutil/trace.go:171","msg":"trace[190695714] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"140.615934ms","start":"2023-12-26T23:01:47.66181Z","end":"2023-12-26T23:01:47.802426Z","steps":["trace[190695714] 'process raft request'  (duration: 140.267514ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:03:13 up 7 min,  0 users,  load average: 0.42, 0.46, 0.23
	Linux multinode-455300 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [5e6fbedb8b41] <==
	I1226 23:02:12.822875       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:02:22.835178       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:02:22.835208       1 main.go:227] handling current node
	I1226 23:02:22.835220       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:02:22.835227       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:02:32.841859       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:02:32.841908       1 main.go:227] handling current node
	I1226 23:02:32.841921       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:02:32.841927       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:02:42.849120       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:02:42.849219       1 main.go:227] handling current node
	I1226 23:02:42.849235       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:02:42.849243       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:02:52.858315       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:02:52.858435       1 main.go:227] handling current node
	I1226 23:02:52.858466       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:02:52.858475       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:03:02.872620       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:03:02.872663       1 main.go:227] handling current node
	I1226 23:03:02.872676       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:03:02.872683       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:03:12.886838       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:03:12.886884       1 main.go:227] handling current node
	I1226 23:03:12.886897       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:03:12.886904       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0d2ca397ea4b] <==
	I1226 22:58:12.500479       1 cache.go:39] Caches are synced for autoregister controller
	I1226 22:58:12.541075       1 shared_informer.go:318] Caches are synced for configmaps
	I1226 22:58:12.541568       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1226 22:58:12.543964       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1226 22:58:12.544936       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1226 22:58:12.544954       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1226 22:58:12.545222       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1226 22:58:12.548561       1 controller.go:624] quota admission added evaluator for: namespaces
	E1226 22:58:12.600836       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1226 22:58:12.809933       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 22:58:13.364101       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1226 22:58:13.374168       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1226 22:58:13.374184       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1226 22:58:14.700740       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 22:58:14.789897       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1226 22:58:14.957254       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1226 22:58:14.970741       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.21.184.4]
	I1226 22:58:14.971782       1 controller.go:624] quota admission added evaluator for: endpoints
	I1226 22:58:14.988816       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 22:58:15.441837       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1226 22:58:16.537467       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1226 22:58:16.563640       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1226 22:58:16.593420       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1226 22:58:29.438489       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1226 22:58:29.588994       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9a1fd87d0726] <==
	I1226 22:58:43.973870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.2µs"
	I1226 22:58:44.008003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.499µs"
	I1226 22:58:46.356558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.7µs"
	I1226 22:58:46.402053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.900363ms"
	I1226 22:58:46.402416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.4µs"
	I1226 22:58:48.691063       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1226 23:01:32.226604       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-455300-m02\" does not exist"
	I1226 23:01:32.255494       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-455300-m02" podCIDRs=["10.244.1.0/24"]
	I1226 23:01:32.271862       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bqlf8"
	I1226 23:01:32.277875       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zt55b"
	I1226 23:01:33.725610       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-455300-m02"
	I1226 23:01:33.725826       1 event.go:307] "Event occurred" object="multinode-455300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-455300-m02 event: Registered Node multinode-455300-m02 in Controller"
	I1226 23:01:54.068325       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:02:20.952593       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1226 23:02:20.982276       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-bskhd"
	I1226 23:02:21.008155       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-flvvn"
	I1226 23:02:21.053900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="101.738222ms"
	I1226 23:02:21.091416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.47224ms"
	I1226 23:02:21.091483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.901µs"
	I1226 23:02:21.111188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="280.208µs"
	I1226 23:02:21.128941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.502µs"
	I1226 23:02:24.191648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.250081ms"
	I1226 23:02:24.192309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.801µs"
	I1226 23:02:24.485337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.424118ms"
	I1226 23:02:24.486046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="112.004µs"
	
	
	==> kube-proxy [de1e7a6bed71] <==
	I1226 22:58:30.910121       1 server_others.go:69] "Using iptables proxy"
	I1226 22:58:30.925166       1 node.go:141] Successfully retrieved node IP: 172.21.184.4
	I1226 22:58:30.980870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1226 22:58:30.981024       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1226 22:58:30.985760       1 server_others.go:152] "Using iptables Proxier"
	I1226 22:58:30.986256       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 22:58:30.987130       1 server.go:846] "Version info" version="v1.28.4"
	I1226 22:58:30.987357       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:58:30.989326       1 config.go:188] "Starting service config controller"
	I1226 22:58:30.989433       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 22:58:30.989835       1 config.go:97] "Starting endpoint slice config controller"
	I1226 22:58:30.989865       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 22:58:30.993636       1 config.go:315] "Starting node config controller"
	I1226 22:58:30.993653       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 22:58:31.090153       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1226 22:58:31.090220       1 shared_informer.go:318] Caches are synced for service config
	I1226 22:58:31.094863       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [239b6c40fa39] <==
	W1226 22:58:13.567937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.568136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:13.621404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1226 22:58:13.621495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1226 22:58:13.720559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 22:58:13.720681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 22:58:13.761277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1226 22:58:13.761414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1226 22:58:13.814126       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 22:58:13.814406       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1226 22:58:13.815013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.815313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:13.913876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.913913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:13.947103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:58:13.947256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1226 22:58:13.973770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:58:13.973856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1226 22:58:13.988228       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.988370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:14.058498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:58:14.058642       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 22:58:14.126846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 22:58:14.126942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1226 22:58:16.909733       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-26 22:56:19 UTC, ends at Tue 2023-12-26 23:03:13 UTC. --
	Dec 26 22:58:44 multinode-455300 kubelet[2710]: I1226 22:58:44.066728    2710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jccz\" (UniqueName: \"kubernetes.io/projected/e274f19d-1940-400d-b887-aaf390e64fdd-kube-api-access-4jccz\") pod \"storage-provisioner\" (UID: \"e274f19d-1940-400d-b887-aaf390e64fdd\") " pod="kube-system/storage-provisioner"
	Dec 26 22:58:44 multinode-455300 kubelet[2710]: I1226 22:58:44.066781    2710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxt67\" (UniqueName: \"kubernetes.io/projected/fbc5229e-2af2-4e17-b23c-ebf836a42aa2-kube-api-access-jxt67\") pod \"coredns-5dd5756b68-fj9bd\" (UID: \"fbc5229e-2af2-4e17-b23c-ebf836a42aa2\") " pod="kube-system/coredns-5dd5756b68-fj9bd"
	Dec 26 22:58:44 multinode-455300 kubelet[2710]: I1226 22:58:44.066807    2710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e274f19d-1940-400d-b887-aaf390e64fdd-tmp\") pod \"storage-provisioner\" (UID: \"e274f19d-1940-400d-b887-aaf390e64fdd\") " pod="kube-system/storage-provisioner"
	Dec 26 22:58:45 multinode-455300 kubelet[2710]: I1226 22:58:45.305837    2710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a2f8149f7fdbaa003ad9b9551e6a0f5676cf97ccb1b00d081ecfc9662b31db"
	Dec 26 22:58:46 multinode-455300 kubelet[2710]: I1226 22:58:46.387265    2710 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fj9bd" podStartSLOduration=17.387219171 podCreationTimestamp="2023-12-26 22:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:58:46.356607131 +0000 UTC m=+29.861876255" watchObservedRunningTime="2023-12-26 22:58:46.387219171 +0000 UTC m=+29.892488295"
	Dec 26 22:59:16 multinode-455300 kubelet[2710]: E1226 22:59:16.935139    2710 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 22:59:16 multinode-455300 kubelet[2710]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 22:59:16 multinode-455300 kubelet[2710]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 22:59:16 multinode-455300 kubelet[2710]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:00:16 multinode-455300 kubelet[2710]: E1226 23:00:16.934725    2710 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:00:16 multinode-455300 kubelet[2710]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:00:16 multinode-455300 kubelet[2710]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:00:16 multinode-455300 kubelet[2710]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:01:16 multinode-455300 kubelet[2710]: E1226 23:01:16.934710    2710 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:01:16 multinode-455300 kubelet[2710]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:01:16 multinode-455300 kubelet[2710]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:01:16 multinode-455300 kubelet[2710]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:02:16 multinode-455300 kubelet[2710]: E1226 23:02:16.936222    2710 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:02:16 multinode-455300 kubelet[2710]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:02:16 multinode-455300 kubelet[2710]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:02:16 multinode-455300 kubelet[2710]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:02:21 multinode-455300 kubelet[2710]: I1226 23:02:21.045699    2710 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=223.045651185 podCreationTimestamp="2023-12-26 22:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:58:46.44339096 +0000 UTC m=+29.948660084" watchObservedRunningTime="2023-12-26 23:02:21.045651185 +0000 UTC m=+244.550920309"
	Dec 26 23:02:21 multinode-455300 kubelet[2710]: I1226 23:02:21.046155    2710 topology_manager.go:215] "Topology Admit Handler" podUID="39d2290f-2a6b-4976-867a-9170ff6a140d" podNamespace="default" podName="busybox-5bc68d56bd-flvvn"
	Dec 26 23:02:21 multinode-455300 kubelet[2710]: I1226 23:02:21.148342    2710 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5g44\" (UniqueName: \"kubernetes.io/projected/39d2290f-2a6b-4976-867a-9170ff6a140d-kube-api-access-n5g44\") pod \"busybox-5bc68d56bd-flvvn\" (UID: \"39d2290f-2a6b-4976-867a-9170ff6a140d\") " pod="default/busybox-5bc68d56bd-flvvn"
	Dec 26 23:02:42 multinode-455300 kubelet[2710]: E1226 23:02:42.361005    2710 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59588->127.0.0.1:45251: write tcp 127.0.0.1:59588->127.0.0.1:45251: write: broken pipe
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:03:05.353904    6156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-455300 -n multinode-455300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-455300 -n multinode-455300: (12.2685511s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-455300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.56s)

TestMultiNode/serial/RestartKeepsNodes (502.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-455300
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-455300
E1226 23:18:36.117521   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-455300: (1m23.2355935s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-455300 --wait=true -v=8 --alsologtostderr
E1226 23:19:01.503716   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:21:05.426535   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 23:21:39.376219   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:23:36.127914   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:24:01.500503   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-455300 --wait=true -v=8 --alsologtostderr: exit status 1 (6m21.0658914s)

-- stdout --
	* [multinode-455300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node multinode-455300 in cluster multinode-455300
	* Restarting existing hyperv VM for "multinode-455300" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-455300-m02 in cluster multinode-455300
	* Restarting existing hyperv VM for "multinode-455300-m02" ...
	* Found network options:
	  - NO_PROXY=172.21.182.57
	  - NO_PROXY=172.21.182.57
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	  - env NO_PROXY=172.21.182.57
	* Verifying Kubernetes components...
	* Starting worker node multinode-455300-m03 in cluster multinode-455300
	* Restarting existing hyperv VM for "multinode-455300-m03" ...

-- /stdout --
** stderr ** 
	W1226 23:18:53.277534   14940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1226 23:18:53.346533   14940 out.go:296] Setting OutFile to fd 1300 ...
	I1226 23:18:53.347534   14940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:18:53.347534   14940 out.go:309] Setting ErrFile to fd 1040...
	I1226 23:18:53.347534   14940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:18:53.369537   14940 out.go:303] Setting JSON to false
	I1226 23:18:53.373524   14940 start.go:128] hostinfo: {"hostname":"minikube1","uptime":7132,"bootTime":1703625601,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 23:18:53.373524   14940 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 23:18:53.378526   14940 out.go:177] * [multinode-455300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 23:18:53.382543   14940 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:18:53.382543   14940 notify.go:220] Checking for updates...
	I1226 23:18:53.384533   14940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 23:18:53.387532   14940 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 23:18:53.390533   14940 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 23:18:53.393534   14940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 23:18:53.396534   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:18:53.396534   14940 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 23:18:58.744736   14940 out.go:177] * Using the hyperv driver based on existing profile
	I1226 23:18:58.748857   14940 start.go:298] selected driver: hyperv
	I1226 23:18:58.749005   14940 start.go:902] validating driver "hyperv" against &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:18:58.749005   14940 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 23:18:58.796052   14940 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 23:18:58.796052   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:18:58.796052   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:18:58.796052   14940 start_flags.go:323] config:
	{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:18:58.796664   14940 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:18:58.801612   14940 out.go:177] * Starting control plane node multinode-455300 in cluster multinode-455300
	I1226 23:18:58.803805   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:18:58.803805   14940 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 23:18:58.803805   14940 cache.go:56] Caching tarball of preloaded images
	I1226 23:18:58.804465   14940 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 23:18:58.804465   14940 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 23:18:58.804465   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:18:58.806611   14940 start.go:365] acquiring machines lock for multinode-455300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:18:58.807196   14940 start.go:369] acquired machines lock for "multinode-455300" in 584.5µs
	I1226 23:18:58.807196   14940 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:18:58.807196   14940 fix.go:54] fixHost starting: 
	I1226 23:18:58.807941   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:01.488243   14940 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:19:01.488243   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:01.489344   14940 fix.go:102] recreateIfNeeded on multinode-455300: state=Stopped err=<nil>
	W1226 23:19:01.489344   14940 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:19:01.492322   14940 out.go:177] * Restarting existing hyperv VM for "multinode-455300" ...
	I1226 23:19:01.495927   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300
	I1226 23:19:04.532264   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:04.532486   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:04.532486   14940 main.go:141] libmachine: Waiting for host to start...
	I1226 23:19:04.532565   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:06.802124   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:06.802306   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:06.802408   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:09.327795   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:09.327931   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:10.328604   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:12.554728   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:12.554966   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:12.554966   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:15.131876   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:15.132060   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:16.134733   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:18.358398   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:18.358398   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:18.358398   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:20.934107   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:20.934167   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:21.947838   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:24.189402   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:24.189402   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:24.189402   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:26.749185   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:26.749261   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:27.765373   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:30.031669   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:30.031669   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:30.031896   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:32.664464   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:32.664464   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:32.666910   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:34.848664   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:34.848664   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:34.848664   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:37.481469   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:37.481469   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:37.481469   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:19:37.484746   14940 machine.go:88] provisioning docker machine ...
	I1226 23:19:37.484827   14940 buildroot.go:166] provisioning hostname "multinode-455300"
	I1226 23:19:37.484943   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:39.629726   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:39.629936   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:39.630027   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:42.214437   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:42.214437   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:42.221897   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:19:42.222713   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:19:42.222713   14940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300 && echo "multinode-455300" | sudo tee /etc/hostname
	I1226 23:19:42.400370   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300
	
	I1226 23:19:42.400910   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:44.562322   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:44.562512   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:44.562512   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:47.131604   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:47.131604   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:47.137123   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:19:47.137952   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:19:47.137952   14940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 23:19:47.309743   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 23:19:47.309743   14940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:19:47.309743   14940 buildroot.go:174] setting up certificates
	I1226 23:19:47.309743   14940 provision.go:83] configureAuth start
	I1226 23:19:47.309743   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:49.393760   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:49.393760   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:49.393846   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:51.939473   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:51.939473   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:51.939574   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:54.064322   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:54.064322   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:54.064322   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:56.612771   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:56.613102   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:56.613287   14940 provision.go:138] copyHostCerts
	I1226 23:19:56.613287   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:19:56.613287   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:19:56.613852   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:19:56.614347   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:19:56.615186   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:19:56.615186   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:19:56.615186   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:19:56.616091   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:19:56.617648   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:19:56.617816   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:19:56.617816   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:19:56.618353   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:19:56.619306   14940 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300 san=[172.21.182.57 172.21.182.57 localhost 127.0.0.1 minikube multinode-455300]
	I1226 23:19:56.841336   14940 provision.go:172] copyRemoteCerts
	I1226 23:19:56.852386   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:19:56.853421   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:59.038515   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:59.038515   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:59.038629   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:01.660897   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:01.660897   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:01.661425   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:01.787347   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9348692s)
	I1226 23:20:01.787347   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:20:01.787347   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:20:01.828830   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:20:01.829541   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1226 23:20:01.875390   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:20:01.875921   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 23:20:01.917952   14940 provision.go:86] duration metric: configureAuth took 14.6081693s
	I1226 23:20:01.917996   14940 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:20:01.918357   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:20:01.918357   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:04.086062   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:04.086296   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:04.086393   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:06.724968   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:06.724968   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:06.731807   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:06.732515   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:06.732515   14940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:20:06.890061   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:20:06.890061   14940 buildroot.go:70] root file system type: tmpfs
	I1226 23:20:06.890356   14940 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:20:06.890470   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:09.059708   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:09.059708   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:09.059917   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:11.717054   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:11.717054   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:11.722954   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:11.723678   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:11.723678   14940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:20:11.919192   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:20:11.919275   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:14.094356   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:14.094356   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:14.094647   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:16.729840   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:16.730179   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:16.736043   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:16.736857   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:16.736857   14940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:20:18.177192   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:20:18.177192   14940 machine.go:91] provisioned docker machine in 40.6923732s
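The `diff ... || { mv ...; systemctl ... }` command that produced the output above is an install-if-changed pattern: the unit file is only swapped in (and the daemon reloaded) when the new file differs from, or is missing at, the target path. A minimal standalone sketch of that pattern, using temp files instead of `/lib/systemd/system` and `echo` in place of the reload/restart step (no root or systemd assumed):

```shell
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
msg=""
# diff exits non-zero when the files differ or the target does not exist
# (the "can't stat" case in the log above), so the install branch runs
# only when there is a real change to apply.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  msg="unit updated"   # in the log this is where daemon-reload/enable/restart run
}
echo "$msg"
cat "$dir/docker.service"
```

Because `diff` also fails on a missing target, the first provision and later config changes share one code path, and an unchanged unit file triggers no docker restart.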
	I1226 23:20:18.177192   14940 start.go:300] post-start starting for "multinode-455300" (driver="hyperv")
	I1226 23:20:18.177192   14940 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:20:18.195070   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:20:18.195070   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:20.405793   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:20.406089   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:20.406089   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:23.064278   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:23.064413   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:23.064592   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:23.191586   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9964506s)
	I1226 23:20:23.206181   14940 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:20:23.212464   14940 command_runner.go:130] > NAME=Buildroot
	I1226 23:20:23.212605   14940 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:20:23.212605   14940 command_runner.go:130] > ID=buildroot
	I1226 23:20:23.212605   14940 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:20:23.212753   14940 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:20:23.212753   14940 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:20:23.212861   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:20:23.213428   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:20:23.214577   14940 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:20:23.214577   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:20:23.228436   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:20:23.245542   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:20:23.287478   14940 start.go:303] post-start completed in 5.1102875s
	I1226 23:20:23.287478   14940 fix.go:56] fixHost completed within 1m24.4802995s
	I1226 23:20:23.287478   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:25.487278   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:25.487278   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:25.487278   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:28.112520   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:28.112520   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:28.118459   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:28.119246   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:28.119391   14940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 23:20:28.273873   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703632828.271046441
	
	I1226 23:20:28.273873   14940 fix.go:206] guest clock: 1703632828.271046441
	I1226 23:20:28.273873   14940 fix.go:219] Guest: 2023-12-26 23:20:28.271046441 +0000 UTC Remote: 2023-12-26 23:20:23.2874786 +0000 UTC m=+90.111289801 (delta=4.983567841s)
	I1226 23:20:28.274010   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:30.467819   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:30.467819   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:30.467819   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:33.076617   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:33.076617   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:33.082201   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:33.082991   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:33.082991   14940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703632828
	I1226 23:20:33.248229   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:20:28 UTC 2023
	
	I1226 23:20:33.248229   14940 fix.go:226] clock set: Tue Dec 26 23:20:28 UTC 2023
	 (err=<nil>)
	I1226 23:20:33.248229   14940 start.go:83] releasing machines lock for "multinode-455300", held for 1m34.4410521s
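The clock-fix sequence above reads the guest's `date +%s.%N`, compares it against the host-side reference time (delta=4.983567841s in this run), and resets the guest with `sudo date -s @<epoch>`. A sketch of the delta computation using the two timestamps from the log as literals (illustrative values, no VM or SSH involved):

```shell
guest=1703632828.271046441   # what `date +%s.%N` returned over SSH
host=1703632823.287478600    # host-side reference timestamp from the log
# awk does the fractional-seconds arithmetic; shell arithmetic is integer-only
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { printf "%.3f", g - h }')
echo "delta=${delta}s"
# the log then resets with seconds precision only, dropping the fraction:
echo "would run: date -s @${guest%.*}"
```

Note the reset itself (`sudo date -s @1703632828`) truncates to whole seconds, which is why the subsequent "clock set" line reports only second-level resolution.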
	I1226 23:20:33.248770   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:35.389396   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:35.389628   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:35.389628   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:37.982522   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:37.982522   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:37.987492   14940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 23:20:37.987492   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:37.999146   14940 ssh_runner.go:195] Run: cat /version.json
	I1226 23:20:37.999146   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:40.220643   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:40.220736   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:40.220815   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:40.220815   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:40.220920   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:40.220920   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:42.932082   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:42.932281   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:42.932524   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:42.951984   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:42.951984   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:42.951984   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:43.032777   14940 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I1226 23:20:43.032864   14940 ssh_runner.go:235] Completed: cat /version.json: (5.0337188s)
	I1226 23:20:43.046955   14940 ssh_runner.go:195] Run: systemctl --version
	I1226 23:20:43.138625   14940 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 23:20:43.138802   14940 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1511344s)
	I1226 23:20:43.138802   14940 command_runner.go:130] > systemd 247 (247)
	I1226 23:20:43.138891   14940 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1226 23:20:43.152571   14940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 23:20:43.164755   14940 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1226 23:20:43.165509   14940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 23:20:43.178888   14940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 23:20:43.203771   14940 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1226 23:20:43.203771   14940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 23:20:43.203771   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:20:43.203771   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:20:43.233202   14940 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1226 23:20:43.246065   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 23:20:43.277411   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 23:20:43.294174   14940 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 23:20:43.307588   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 23:20:43.336597   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:20:43.370359   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 23:20:43.400645   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:20:43.430141   14940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 23:20:43.461011   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1226 23:20:43.493760   14940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 23:20:43.510041   14940 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 23:20:43.523806   14940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 23:20:43.553234   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:43.724950   14940 ssh_runner.go:195] Run: sudo systemctl restart containerd
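The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to pin the sandbox image, runtime, and cgroup driver. A sketch of the `SystemdCgroup` edit applied to a temp copy (hypothetical minimal config, not the real containerd file; GNU sed assumed for `-i -r`):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# same expression as the log: capture leading indentation in \1 so the
# rewritten line keeps its position inside the TOML table
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```

Capturing the indentation keeps the edit idempotent: rerunning it against an already-patched config produces no further change.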
	I1226 23:20:43.752033   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:20:43.767045   14940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 23:20:43.791954   14940 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1226 23:20:43.791954   14940 command_runner.go:130] > [Unit]
	I1226 23:20:43.791954   14940 command_runner.go:130] > Description=Docker Application Container Engine
	I1226 23:20:43.791954   14940 command_runner.go:130] > Documentation=https://docs.docker.com
	I1226 23:20:43.791954   14940 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1226 23:20:43.791954   14940 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1226 23:20:43.791954   14940 command_runner.go:130] > StartLimitBurst=3
	I1226 23:20:43.791954   14940 command_runner.go:130] > StartLimitIntervalSec=60
	I1226 23:20:43.791954   14940 command_runner.go:130] > [Service]
	I1226 23:20:43.791954   14940 command_runner.go:130] > Type=notify
	I1226 23:20:43.791954   14940 command_runner.go:130] > Restart=on-failure
	I1226 23:20:43.791954   14940 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1226 23:20:43.791954   14940 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1226 23:20:43.791954   14940 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1226 23:20:43.791954   14940 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1226 23:20:43.791954   14940 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1226 23:20:43.791954   14940 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1226 23:20:43.791954   14940 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1226 23:20:43.791954   14940 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1226 23:20:43.791954   14940 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1226 23:20:43.791954   14940 command_runner.go:130] > ExecStart=
	I1226 23:20:43.791954   14940 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1226 23:20:43.791954   14940 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1226 23:20:43.791954   14940 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1226 23:20:43.791954   14940 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1226 23:20:43.791954   14940 command_runner.go:130] > LimitNOFILE=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > LimitNPROC=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > LimitCORE=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1226 23:20:43.791954   14940 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1226 23:20:43.791954   14940 command_runner.go:130] > TasksMax=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > TimeoutStartSec=0
	I1226 23:20:43.791954   14940 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1226 23:20:43.791954   14940 command_runner.go:130] > Delegate=yes
	I1226 23:20:43.791954   14940 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1226 23:20:43.791954   14940 command_runner.go:130] > KillMode=process
	I1226 23:20:43.791954   14940 command_runner.go:130] > [Install]
	I1226 23:20:43.791954   14940 command_runner.go:130] > WantedBy=multi-user.target
	I1226 23:20:43.806265   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:20:43.840822   14940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 23:20:43.888855   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:20:43.924775   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:20:43.961013   14940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 23:20:44.022763   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:20:44.044977   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:20:44.076095   14940 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1226 23:20:44.089733   14940 ssh_runner.go:195] Run: which cri-dockerd
	I1226 23:20:44.095990   14940 command_runner.go:130] > /usr/bin/cri-dockerd
	I1226 23:20:44.110463   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 23:20:44.129679   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 23:20:44.173364   14940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 23:20:44.349002   14940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 23:20:44.513856   14940 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 23:20:44.514108   14940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 23:20:44.561218   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:44.740859   14940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 23:20:46.457974   14940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7171149s)
	I1226 23:20:46.475803   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:20:46.667076   14940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 23:20:46.853889   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:20:47.032893   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:47.216598   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 23:20:47.255948   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:47.449102   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 23:20:47.561358   14940 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 23:20:47.575209   14940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 23:20:47.583211   14940 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1226 23:20:47.583375   14940 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 23:20:47.583375   14940 command_runner.go:130] > Device: 16h/22d	Inode: 898         Links: 1
	I1226 23:20:47.583375   14940 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1226 23:20:47.583451   14940 command_runner.go:130] > Access: 2023-12-26 23:20:47.468923428 +0000
	I1226 23:20:47.583451   14940 command_runner.go:130] > Modify: 2023-12-26 23:20:47.468923428 +0000
	I1226 23:20:47.583480   14940 command_runner.go:130] > Change: 2023-12-26 23:20:47.473923428 +0000
	I1226 23:20:47.583480   14940 command_runner.go:130] >  Birth: -
	I1226 23:20:47.583978   14940 start.go:543] Will wait 60s for crictl version
	I1226 23:20:47.598353   14940 ssh_runner.go:195] Run: which crictl
	I1226 23:20:47.603460   14940 command_runner.go:130] > /usr/bin/crictl
	I1226 23:20:47.616646   14940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 23:20:47.693255   14940 command_runner.go:130] > Version:  0.1.0
	I1226 23:20:47.693336   14940 command_runner.go:130] > RuntimeName:  docker
	I1226 23:20:47.693336   14940 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1226 23:20:47.693336   14940 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 23:20:47.693443   14940 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 23:20:47.704313   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:20:47.739368   14940 command_runner.go:130] > 24.0.7
	I1226 23:20:47.750325   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:20:47.784451   14940 command_runner.go:130] > 24.0.7
	I1226 23:20:47.789113   14940 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 23:20:47.789113   14940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 23:20:47.795251   14940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 23:20:47.795502   14940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 23:20:47.795502   14940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 23:20:47.795502   14940 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 23:20:47.799213   14940 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 23:20:47.799213   14940 ip.go:210] interface addr: 172.21.176.1/20
	I1226 23:20:47.811837   14940 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 23:20:47.818457   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
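The `/etc/hosts` command above is a remove-then-append refresh: strip any existing `host.minikube.internal` line, append one with the current host IP, and copy the rebuilt file back over the original. A sketch against a temp file (no sudo, tab built with `printf` for portability; the sample entries are illustrative):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
ip="172.21.176.1"
tab=$(printf '\t')
# drop any stale mapping, then append the fresh one; writing to a temp
# file first avoids truncating the source while grep is still reading it
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep host.minikube.internal "$hosts"
```

Filtering before appending makes the refresh idempotent, so a re-provision with a changed host IP replaces the entry instead of accumulating duplicates.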
	I1226 23:20:47.837668   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:20:47.847599   14940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1226 23:20:47.875351   14940 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1226 23:20:47.875351   14940 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1226 23:20:47.875351   14940 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1226 23:20:47.875351   14940 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 23:20:47.875351   14940 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1226 23:20:47.875445   14940 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1226 23:20:47.875445   14940 docker.go:601] Images already preloaded, skipping extraction
	I1226 23:20:47.884322   14940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1226 23:20:47.909964   14940 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 23:20:47.909964   14940 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1226 23:20:47.909964   14940 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1226 23:20:47.909964   14940 cache_images.go:84] Images are preloaded, skipping loading
	I1226 23:20:47.918964   14940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 23:20:47.955003   14940 command_runner.go:130] > cgroupfs
	I1226 23:20:47.955677   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:20:47.956008   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:20:47.956008   14940 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 23:20:47.956008   14940 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.182.57 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-455300 NodeName:multinode-455300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.182.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.182.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 23:20:47.956483   14940 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.182.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-455300"
	  kubeletExtraArgs:
	    node-ip: 172.21.182.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 23:20:47.956759   14940 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-455300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.182.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 23:20:47.970960   14940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 23:20:47.989034   14940 command_runner.go:130] > kubeadm
	I1226 23:20:47.989069   14940 command_runner.go:130] > kubectl
	I1226 23:20:47.989069   14940 command_runner.go:130] > kubelet
	I1226 23:20:47.989115   14940 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 23:20:48.002719   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 23:20:48.018037   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1226 23:20:48.045454   14940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 23:20:48.074413   14940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1226 23:20:48.122052   14940 ssh_runner.go:195] Run: grep 172.21.182.57	control-plane.minikube.internal$ /etc/hosts
	I1226 23:20:48.128839   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.182.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 23:20:48.147956   14940 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300 for IP: 172.21.182.57
	I1226 23:20:48.147956   14940 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:48.148147   14940 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 23:20:48.148963   14940 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 23:20:48.149858   14940 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.key
	I1226 23:20:48.149968   14940 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380
	I1226 23:20:48.150135   14940 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380 with IP's: [172.21.182.57 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 23:20:48.313557   14940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380 ...
	I1226 23:20:48.314562   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380: {Name:mk331fe892099c0aec4f61b69d60598dd6a86faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:48.315586   14940 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380 ...
	I1226 23:20:48.315586   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380: {Name:mk9ce275ae6084ede4e9476a8540b9bee334314d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:48.316554   14940 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt
	I1226 23:20:48.329559   14940 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key
	I1226 23:20:48.330575   14940 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key
	I1226 23:20:48.330575   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 23:20:48.330575   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 23:20:48.331628   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 23:20:48.331628   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 23:20:48.333008   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1226 23:20:48.333008   14940 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1226 23:20:48.333670   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 23:20:48.333842   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 23:20:48.333842   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 23:20:48.334475   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 23:20:48.335066   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1226 23:20:48.335380   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /usr/share/ca-certificates/107282.pem
	I1226 23:20:48.335380   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.335962   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem -> /usr/share/ca-certificates/10728.pem
	I1226 23:20:48.336667   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 23:20:48.380971   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 23:20:48.424521   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 23:20:48.463972   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 23:20:48.513727   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 23:20:48.556351   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 23:20:48.596579   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 23:20:48.640632   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 23:20:48.681514   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1226 23:20:48.720880   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 23:20:48.760674   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1226 23:20:48.800891   14940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 23:20:48.842926   14940 ssh_runner.go:195] Run: openssl version
	I1226 23:20:48.853228   14940 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1226 23:20:48.866966   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 23:20:48.898433   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.905828   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.905828   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.919244   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.926722   14940 command_runner.go:130] > b5213941
	I1226 23:20:48.940737   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 23:20:48.972434   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1226 23:20:49.005627   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.012588   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.012588   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.025587   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.032630   14940 command_runner.go:130] > 51391683
	I1226 23:20:49.049709   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1226 23:20:49.082696   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1226 23:20:49.113188   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.119749   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.120581   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.133231   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.141894   14940 command_runner.go:130] > 3ec20f2e
	I1226 23:20:49.155041   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 23:20:49.186181   14940 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 23:20:49.193198   14940 command_runner.go:130] > ca.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > ca.key
	I1226 23:20:49.193198   14940 command_runner.go:130] > healthcheck-client.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > healthcheck-client.key
	I1226 23:20:49.193198   14940 command_runner.go:130] > peer.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > peer.key
	I1226 23:20:49.193198   14940 command_runner.go:130] > server.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > server.key
	I1226 23:20:49.206310   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1226 23:20:49.215198   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.227278   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1226 23:20:49.235982   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.249364   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1226 23:20:49.256305   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.269842   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1226 23:20:49.278009   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.293716   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1226 23:20:49.303323   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.316009   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1226 23:20:49.324851   14940 command_runner.go:130] > Certificate will not expire
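The six "Certificate will not expire" lines above come from `openssl x509 -checkend 86400`, which exits 0 when a certificate is still valid 24 hours from now. A hedged sketch of that check against a throwaway self-signed certificate (the cert and names here are illustrative, not minikube's real files):

```shell
# Generate a short-lived self-signed cert, then apply the same 24h expiry
# check the log shows for apiserver/etcd/front-proxy certs.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=checkend-demo" \
  -keyout "$DIR/key.pem" -out "$DIR/cert.pem" 2>/dev/null

if openssl x509 -noout -in "$DIR/cert.pem" -checkend 86400 >/dev/null; then
  CERT_STATUS="valid"   # still good for 24h: reuse the existing cert
else
  CERT_STATUS="renew"   # would need regeneration
fi
```

The exit status, not the printed message, is what a caller should branch on.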
	I1226 23:20:49.325787   14940 kubeadm.go:404] StartCluster: {Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.182.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:20:49.336039   14940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 23:20:49.380428   14940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 23:20:49.399417   14940 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1226 23:20:49.399417   14940 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1226 23:20:49.399417   14940 command_runner.go:130] > /var/lib/minikube/etcd:
	I1226 23:20:49.399417   14940 command_runner.go:130] > member
	I1226 23:20:49.399417   14940 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1226 23:20:49.399417   14940 kubeadm.go:636] restartCluster start
	I1226 23:20:49.412418   14940 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1226 23:20:49.427741   14940 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1226 23:20:49.428800   14940 kubeconfig.go:135] verify returned: extract IP: "multinode-455300" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:20:49.429154   14940 kubeconfig.go:146] "multinode-455300" context is missing from C:\Users\jenkins.minikube1\minikube-integration\kubeconfig - will repair!
	I1226 23:20:49.429376   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:49.443034   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:20:49.444020   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:20:49.446115   14940 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 23:20:49.457631   14940 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1226 23:20:49.476514   14940 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I1226 23:20:49.476514   14940 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I1226 23:20:49.476514   14940 command_runner.go:130] > @@ -1,7 +1,7 @@
	I1226 23:20:49.476514   14940 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I1226 23:20:49.476514   14940 command_runner.go:130] >  kind: InitConfiguration
	I1226 23:20:49.476514   14940 command_runner.go:130] >  localAPIEndpoint:
	I1226 23:20:49.476514   14940 command_runner.go:130] > -  advertiseAddress: 172.21.184.4
	I1226 23:20:49.476514   14940 command_runner.go:130] > +  advertiseAddress: 172.21.182.57
	I1226 23:20:49.476514   14940 command_runner.go:130] >    bindPort: 8443
	I1226 23:20:49.476514   14940 command_runner.go:130] >  bootstrapTokens:
	I1226 23:20:49.476514   14940 command_runner.go:130] >    - groups:
	I1226 23:20:49.476514   14940 command_runner.go:130] > @@ -14,13 +14,13 @@
	I1226 23:20:49.476514   14940 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I1226 23:20:49.476514   14940 command_runner.go:130] >    name: "multinode-455300"
	I1226 23:20:49.476514   14940 command_runner.go:130] >    kubeletExtraArgs:
	I1226 23:20:49.476514   14940 command_runner.go:130] > -    node-ip: 172.21.184.4
	I1226 23:20:49.476514   14940 command_runner.go:130] > +    node-ip: 172.21.182.57
	I1226 23:20:49.476514   14940 command_runner.go:130] >    taints: []
	I1226 23:20:49.476514   14940 command_runner.go:130] >  ---
	I1226 23:20:49.476514   14940 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I1226 23:20:49.476514   14940 command_runner.go:130] >  kind: ClusterConfiguration
	I1226 23:20:49.476514   14940 command_runner.go:130] >  apiServer:
	I1226 23:20:49.476514   14940 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.21.184.4"]
	I1226 23:20:49.476514   14940 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	I1226 23:20:49.476514   14940 command_runner.go:130] >    extraArgs:
	I1226 23:20:49.476514   14940 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I1226 23:20:49.476514   14940 command_runner.go:130] >  controllerManager:
	I1226 23:20:49.476514   14940 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.21.184.4
	+  advertiseAddress: 172.21.182.57
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-455300"
	   kubeletExtraArgs:
	-    node-ip: 172.21.184.4
	+    node-ip: 172.21.182.57
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.21.184.4"]
	+  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I1226 23:20:49.476514   14940 kubeadm.go:1135] stopping kube-system containers ...
	I1226 23:20:49.487715   14940 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 23:20:49.517857   14940 command_runner.go:130] > 5944000e150d
	I1226 23:20:49.518750   14940 command_runner.go:130] > c49ce5a60988
	I1226 23:20:49.518750   14940 command_runner.go:130] > 94c58afb0b3a
	I1226 23:20:49.518750   14940 command_runner.go:130] > 58a2f8149f7f
	I1226 23:20:49.518750   14940 command_runner.go:130] > 5e6fbedb8b41
	I1226 23:20:49.518750   14940 command_runner.go:130] > de1e7a6bed71
	I1226 23:20:49.518750   14940 command_runner.go:130] > 6374d63f4880
	I1226 23:20:49.518750   14940 command_runner.go:130] > e74bc4380f45
	I1226 23:20:49.518750   14940 command_runner.go:130] > 2c33bdd1003a
	I1226 23:20:49.518750   14940 command_runner.go:130] > 239b6c40fa39
	I1226 23:20:49.518750   14940 command_runner.go:130] > 9a1fd87d0726
	I1226 23:20:49.518750   14940 command_runner.go:130] > 0d2ca397ea4b
	I1226 23:20:49.518750   14940 command_runner.go:130] > dd32942a9720
	I1226 23:20:49.518750   14940 command_runner.go:130] > 2303b2b6305d
	I1226 23:20:49.518750   14940 command_runner.go:130] > f18330f939ce
	I1226 23:20:49.518750   14940 command_runner.go:130] > d6f5bd631857
	I1226 23:20:49.519056   14940 docker.go:469] Stopping containers: [5944000e150d c49ce5a60988 94c58afb0b3a 58a2f8149f7f 5e6fbedb8b41 de1e7a6bed71 6374d63f4880 e74bc4380f45 2c33bdd1003a 239b6c40fa39 9a1fd87d0726 0d2ca397ea4b dd32942a9720 2303b2b6305d f18330f939ce d6f5bd631857]
	I1226 23:20:49.530540   14940 ssh_runner.go:195] Run: docker stop 5944000e150d c49ce5a60988 94c58afb0b3a 58a2f8149f7f 5e6fbedb8b41 de1e7a6bed71 6374d63f4880 e74bc4380f45 2c33bdd1003a 239b6c40fa39 9a1fd87d0726 0d2ca397ea4b dd32942a9720 2303b2b6305d f18330f939ce d6f5bd631857
	I1226 23:20:49.560278   14940 command_runner.go:130] > 5944000e150d
	I1226 23:20:49.560331   14940 command_runner.go:130] > c49ce5a60988
	I1226 23:20:49.560331   14940 command_runner.go:130] > 94c58afb0b3a
	I1226 23:20:49.560331   14940 command_runner.go:130] > 58a2f8149f7f
	I1226 23:20:49.560391   14940 command_runner.go:130] > 5e6fbedb8b41
	I1226 23:20:49.560391   14940 command_runner.go:130] > de1e7a6bed71
	I1226 23:20:49.560391   14940 command_runner.go:130] > 6374d63f4880
	I1226 23:20:49.560391   14940 command_runner.go:130] > e74bc4380f45
	I1226 23:20:49.560429   14940 command_runner.go:130] > 2c33bdd1003a
	I1226 23:20:49.560429   14940 command_runner.go:130] > 239b6c40fa39
	I1226 23:20:49.560429   14940 command_runner.go:130] > 9a1fd87d0726
	I1226 23:20:49.560429   14940 command_runner.go:130] > 0d2ca397ea4b
	I1226 23:20:49.560429   14940 command_runner.go:130] > dd32942a9720
	I1226 23:20:49.560429   14940 command_runner.go:130] > 2303b2b6305d
	I1226 23:20:49.560429   14940 command_runner.go:130] > f18330f939ce
	I1226 23:20:49.560429   14940 command_runner.go:130] > d6f5bd631857
	I1226 23:20:49.573611   14940 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1226 23:20:49.613888   14940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 23:20:49.631181   14940 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 23:20:49.645688   14940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 23:20:49.662008   14940 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1226 23:20:49.662046   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:50.077512   14940 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 23:20:50.077595   14940 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1226 23:20:50.077595   14940 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1226 23:20:50.077595   14940 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1226 23:20:50.077639   14940 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using the existing "sa" key
	I1226 23:20:50.077673   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 23:20:51.600011   14940 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 23:20:51.600011   14940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.5223381s)
	I1226 23:20:51.600011   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:51.877274   14940 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 23:20:51.877432   14940 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 23:20:51.877432   14940 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 23:20:51.877523   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 23:20:51.974976   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:52.063360   14940 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 23:20:52.063360   14940 api_server.go:52] waiting for apiserver process to appear ...
	I1226 23:20:52.076364   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:52.588295   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:53.083794   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:53.582975   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:54.092938   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:54.587224   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:55.091307   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:55.584622   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:55.612009   14940 command_runner.go:130] > 1851
	I1226 23:20:55.612009   14940 api_server.go:72] duration metric: took 3.5486495s to wait for apiserver process to appear ...
	I1226 23:20:55.612167   14940 api_server.go:88] waiting for apiserver healthz status ...
	I1226 23:20:55.612167   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:20:59.671666   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1226 23:20:59.671666   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1226 23:20:59.672026   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:20:59.714417   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1226 23:20:59.714914   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1226 23:21:00.119743   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:00.128570   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 23:21:00.128687   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 23:21:00.618095   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:00.634132   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 23:21:00.634578   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 23:21:01.124147   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:01.140402   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 23:21:01.140663   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 23:21:01.615956   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:01.625174   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 200:
	ok
	I1226 23:21:01.625562   14940 round_trippers.go:463] GET https://172.21.182.57:8443/version
	I1226 23:21:01.625562   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:01.625562   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:01.625562   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:01.645153   14940 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1226 23:21:01.645153   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:01.645153   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:01.645153   14940 round_trippers.go:580]     Content-Length: 264
	I1226 23:21:01.645153   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:01 GMT
	I1226 23:21:01.645153   14940 round_trippers.go:580]     Audit-Id: 2e623e6d-fa5a-4564-bba2-14c0b0936dfc
	I1226 23:21:01.646102   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:01.646102   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:01.646102   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:01.646102   14940 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1226 23:21:01.646347   14940 api_server.go:141] control plane version: v1.28.4
	I1226 23:21:01.646539   14940 api_server.go:131] duration metric: took 6.0343739s to wait for apiserver health ...
	I1226 23:21:01.646539   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:21:01.646539   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:21:01.650361   14940 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 23:21:01.666280   14940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 23:21:01.673678   14940 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 23:21:01.673732   14940 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1226 23:21:01.673732   14940 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1226 23:21:01.673732   14940 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 23:21:01.673732   14940 command_runner.go:130] > Access: 2023-12-26 23:19:30.718927400 +0000
	I1226 23:21:01.673732   14940 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1226 23:21:01.673809   14940 command_runner.go:130] > Change: 2023-12-26 23:19:18.490000000 +0000
	I1226 23:21:01.673809   14940 command_runner.go:130] >  Birth: -
	I1226 23:21:01.675152   14940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 23:21:01.675152   14940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 23:21:01.726705   14940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 23:21:03.860598   14940 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:21:03.861423   14940 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:21:03.861423   14940 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 23:21:03.861483   14940 command_runner.go:130] > daemonset.apps/kindnet configured
	I1226 23:21:03.861483   14940 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.1347786s)
	I1226 23:21:03.861573   14940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 23:21:03.861768   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:03.861768   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:03.861851   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:03.861851   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:03.867879   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:03.867879   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:03.867943   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:03.867943   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:03 GMT
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Audit-Id: ffffb957-0c6c-41bc-ac27-8354fa858ef7
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:03.869958   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1722"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84131 chars]
	I1226 23:21:03.876434   14940 system_pods.go:59] 12 kube-system pods found
	I1226 23:21:03.876434   14940 system_pods.go:61] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1226 23:21:03.876434   14940 system_pods.go:61] "etcd-multinode-455300" [cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kindnet-8jsvj" [376eb267-ce7d-4497-a85e-ff9224a25347] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kindnet-zt55b" [43604859-483f-4e92-a16c-d3f30cb6e4f1] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-apiserver-multinode-455300" [bbe5516b-f745-4a20-8df3-3cd3ac15d7f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-proxy-2pfcl" [61b5d2fb-802c-4b84-b7fa-7a7e9e024028] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-proxy-bqlf8" [1caff24c-909f-42a9-a4b8-d9c8c1ec8828] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1226 23:21:03.876434   14940 system_pods.go:61] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1226 23:21:03.876434   14940 system_pods.go:74] duration metric: took 14.8608ms to wait for pod list to return data ...
	I1226 23:21:03.876434   14940 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:21:03.876975   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes
	I1226 23:21:03.876975   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:03.876975   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:03.876975   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:03.880778   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:03.881202   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:03.881202   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:03 GMT
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Audit-Id: 60c541fe-1ba1-45b4-aeae-9f94ac186852
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:03.881202   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:03.881596   14940 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1722"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14857 chars]
	I1226 23:21:03.883217   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:03.883309   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:03.883309   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:03.883400   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:03.883400   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:03.883444   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:03.883444   14940 node_conditions.go:105] duration metric: took 7.01ms to run NodePressure ...
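The `node_conditions.go` lines above read each node's capacity out of the NodeList response. A minimal sketch of that extraction, assuming only the two capacity fields the log reports (the `nodeStatus` type here is a hypothetical trimmed stand-in for the full Kubernetes Node schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus keeps just the fields the NodePressure check reads from the
// /api/v1/nodes response; everything else in the Node object is dropped.
type nodeStatus struct {
	Status struct {
		Capacity map[string]string `json:"capacity"`
	} `json:"status"`
}

// capacities extracts the ephemeral-storage and cpu capacity strings that
// the log reports for each node.
func capacities(raw []byte) (storage, cpu string, err error) {
	var n nodeStatus
	if err = json.Unmarshal(raw, &n); err != nil {
		return "", "", err
	}
	return n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"], nil
}

func main() {
	// Sample trimmed by hand from the NodeList body; values match the log.
	raw := []byte(`{"status":{"capacity":{"ephemeral-storage":"17784752Ki","cpu":"2"}}}`)
	storage, cpu, _ := capacities(raw)
	fmt.Println(storage, cpu) // 17784752Ki 2
}
```

With three nodes in the list, running this per item yields the three identical `17784752Ki` / `2` pairs seen above.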
	I1226 23:21:03.883515   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:21:04.279641   14940 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1226 23:21:04.279723   14940 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1226 23:21:04.279792   14940 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1226 23:21:04.280023   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1226 23:21:04.280063   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.280063   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.280063   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.284745   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.285084   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.285084   14940 round_trippers.go:580]     Audit-Id: 52e8e3a9-24c4-44bb-a58e-075468a5ab79
	I1226 23:21:04.285084   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.285084   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.285084   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.285171   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.285171   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.285955   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1724"},"items":[{"metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1717","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I1226 23:21:04.287435   14940 kubeadm.go:787] kubelet initialised
	I1226 23:21:04.287488   14940 kubeadm.go:788] duration metric: took 7.6597ms waiting for restarted kubelet to initialise ...
	I1226 23:21:04.287488   14940 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:04.287611   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:04.287694   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.287694   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.287694   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.297261   14940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 23:21:04.297261   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.297261   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.297261   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Audit-Id: d994bc68-69cd-4473-b10d-bc2eaa017000
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.298249   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1724"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84131 chars]
	I1226 23:21:04.303531   14940 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.303717   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:04.303717   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.303717   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.303717   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.308476   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.309418   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Audit-Id: 94880398-21a1-4ab8-bcbf-53875901d606
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.309418   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.309418   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.309418   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:04.310478   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.310478   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.310478   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.310478   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.313876   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.313876   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Audit-Id: 4e73f529-0f8b-4087-9bc9-d2c591dec233
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.313876   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.313876   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.313876   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.314884   14940 pod_ready.go:97] node "multinode-455300" hosting pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.314884   14940 pod_ready.go:81] duration metric: took 11.2749ms waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.314884   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
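Each `pod_ready.go` round above follows the same pattern: fetch the pod, fetch its host node, and skip the wait when the node's `Ready` condition is not `"True"`, producing the `(skipping!)` lines. A minimal sketch of that decision, assuming a hand-rolled `nodeCondition` type in place of the real Kubernetes API structs:

```go
package main

import "fmt"

// nodeCondition mirrors the {"type":"Ready","status":"False"} entries in
// the Node status that each pod wait queries.
type nodeCondition struct {
	Type   string
	Status string
}

// hostNodeReady reports whether the node hosting a pod is Ready. When it
// is not, the wait for that pod is skipped rather than timed out, which
// is what the "(skipping!)" log lines record.
func hostNodeReady(conds []nodeCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	// No Ready condition reported: treat the node as not ready.
	return false
}

func main() {
	// multinode-455300 currently reports Ready=False, so every control-plane
	// pod wait above short-circuits.
	conds := []nodeCondition{{Type: "Ready", Status: "False"}}
	fmt.Println(hostNodeReady(conds)) // false
}
```

Skipping rather than waiting keeps a restarting node from burning the full 4m0s per-pod budget on pods that cannot become Ready until the node itself does.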
	I1226 23:21:04.314884   14940 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.314884   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:21:04.314884   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.314884   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.314884   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.318961   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.319319   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Audit-Id: 2b18f8e8-a7fc-4cfe-b52a-7cefa4e85b39
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.319319   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.319319   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.319575   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1717","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I1226 23:21:04.319644   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.319644   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.319644   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.319644   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.323255   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.324117   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Audit-Id: bb4e9307-d014-4826-bfa5-51df6c8a614d
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.324117   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.324117   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.324486   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.324624   14940 pod_ready.go:97] node "multinode-455300" hosting pod "etcd-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.324624   14940 pod_ready.go:81] duration metric: took 9.7405ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.324624   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "etcd-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.324624   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.324624   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:21:04.324624   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.324624   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.324624   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.328209   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.328209   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.328209   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.328209   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Audit-Id: 4ac558de-73a9-4fa2-8a3a-cd4da867bc95
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.328209   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"bbe5516b-f745-4a20-8df3-3cd3ac15d7f6","resourceVersion":"1718","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.182.57:8443","kubernetes.io/config.hash":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.mirror":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.seen":"2023-12-26T23:20:52.614245928Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I1226 23:21:04.329215   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.329215   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.329215   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.329215   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.333214   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.333327   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.333327   14940 round_trippers.go:580]     Audit-Id: 6b5291d2-7aa6-48a5-ba78-504d1b1a392f
	I1226 23:21:04.333327   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.333401   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.333401   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.333401   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.333401   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.333401   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.333939   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-apiserver-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.334100   14940 pod_ready.go:81] duration metric: took 9.4758ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.334100   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-apiserver-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.334100   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.334185   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:21:04.334249   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.334249   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.334295   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.339452   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:04.339452   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.339452   14940 round_trippers.go:580]     Audit-Id: b21030d5-2e33-48df-b8cf-af15c479cdf3
	I1226 23:21:04.339452   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.339993   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.339993   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.339993   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.339993   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.340359   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"1710","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1226 23:21:04.340963   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.341003   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.341044   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.341044   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.344820   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.344820   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.344877   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.344877   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.344877   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.344918   14940 round_trippers.go:580]     Audit-Id: bb155b3b-b54c-4535-9762-5a011f8faf6b
	I1226 23:21:04.344918   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.344918   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.345882   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.346319   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-controller-manager-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.346319   14940 pod_ready.go:81] duration metric: took 12.2191ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.346319   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-controller-manager-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.346319   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.481373   14940 request.go:629] Waited for 134.7552ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:21:04.481535   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:21:04.481535   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.481535   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.481535   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.487102   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:04.487102   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.487102   14940 round_trippers.go:580]     Audit-Id: 83225397-ba7b-40c6-9cca-3e80ab93ddcb
	I1226 23:21:04.487649   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.487649   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.487649   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.487649   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.487649   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.488267   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2pfcl","generateName":"kube-proxy-","namespace":"kube-system","uid":"61b5d2fb-802c-4b84-b7fa-7a7e9e024028","resourceVersion":"1631","creationTimestamp":"2023-12-26T23:06:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1226 23:21:04.684794   14940 request.go:629] Waited for 195.8092ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:21:04.685138   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:21:04.685138   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.685138   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.685138   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.691758   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:04.691864   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.691864   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.691864   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.691864   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.691934   14940 round_trippers.go:580]     Audit-Id: 0d223557-b159-4995-900f-83c9b094ee2d
	I1226 23:21:04.691934   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.691934   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.691934   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m03","uid":"ef364efe-5dc7-4fb4-bc7c-76a3eaa41ba4","resourceVersion":"1649","creationTimestamp":"2023-12-26T23:16:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:16:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I1226 23:21:04.692463   14940 pod_ready.go:92] pod "kube-proxy-2pfcl" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:04.692463   14940 pod_ready.go:81] duration metric: took 346.1436ms waiting for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.692463   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.889510   14940 request.go:629] Waited for 196.7269ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:04.889871   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:04.889907   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.889907   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.889978   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.894314   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.894314   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.894314   14940 round_trippers.go:580]     Audit-Id: 7cbd3e3f-fdd5-4521-88f2-83ba565b6a4e
	I1226 23:21:04.894314   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.894930   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.894930   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.894930   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.894930   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.895357   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"635","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1226 23:21:05.093494   14940 request.go:629] Waited for 197.2379ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:05.093629   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:05.093629   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.093629   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.093682   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.097038   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:05.097038   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Audit-Id: b6f98693-ab5b-43c5-b2eb-76034e9076e8
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.097038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.097038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.097694   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"1620","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I1226 23:21:05.098217   14940 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:05.098217   14940 pod_ready.go:81] duration metric: took 405.7545ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:05.098336   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:05.283326   14940 request.go:629] Waited for 184.727ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:05.283447   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:05.283447   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.283600   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.283692   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.287354   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:05.287354   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.287354   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Audit-Id: d370c4f1-ca77-4cef-8c37-68de8a734069
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.288311   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.288660   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"1715","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I1226 23:21:05.488850   14940 request.go:629] Waited for 198.913ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.488850   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.488850   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.488850   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.488850   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.496833   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:05.496833   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.496833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.496833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Audit-Id: 94666ccb-a760-4f83-9be8-31e08a69e36a
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.496833   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:05.497830   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-proxy-hzcqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.497830   14940 pod_ready.go:81] duration metric: took 399.4942ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:05.497830   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-proxy-hzcqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.497830   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:05.695128   14940 request.go:629] Waited for 197.0316ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:05.695276   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:05.695276   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.695276   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.695276   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.698894   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:05.698894   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Audit-Id: 2121d241-7895-4b34-ac5f-3ebabe01122e
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.698894   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.698894   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.698894   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"1711","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1226 23:21:05.881022   14940 request.go:629] Waited for 181.139ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.881179   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.881179   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.881179   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.881179   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.885803   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:05.885803   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Audit-Id: b4589e11-d1ce-4916-8e27-355a74a2a66a
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.886360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.886360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.886571   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:05.887050   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-scheduler-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.887050   14940 pod_ready.go:81] duration metric: took 389.2194ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:05.887161   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-scheduler-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.887161   14940 pod_ready.go:38] duration metric: took 1.5996091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:05.887161   14940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 23:21:05.901927   14940 command_runner.go:130] > -16
	I1226 23:21:05.902568   14940 ops.go:34] apiserver oom_adj: -16
	I1226 23:21:05.902655   14940 kubeadm.go:640] restartCluster took 16.5031542s
	I1226 23:21:05.902655   14940 kubeadm.go:406] StartCluster complete in 16.5768716s
	I1226 23:21:05.902722   14940 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:21:05.902781   14940 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:21:05.904208   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:21:05.906209   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 23:21:05.906315   14940 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 23:21:05.910078   14940 out.go:177] * Enabled addons: 
	I1226 23:21:05.906904   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:21:05.912508   14940 addons.go:508] enable addons completed in 6.1507ms: enabled=[]
	I1226 23:21:05.922440   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:21:05.923103   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:21:05.924968   14940 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 23:21:05.925304   14940 round_trippers.go:463] GET https://172.21.182.57:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 23:21:05.925304   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.925362   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.925362   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.940689   14940 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1226 23:21:05.941041   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.941041   14940 round_trippers.go:580]     Audit-Id: e506e7e0-7c30-4a4c-ac0d-436e4cd19261
	I1226 23:21:05.941041   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.941041   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.941109   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.941143   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.941143   14940 round_trippers.go:580]     Content-Length: 292
	I1226 23:21:05.941143   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.941179   14940 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"1723","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 23:21:05.941361   14940 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-455300" context rescaled to 1 replicas
	I1226 23:21:05.941361   14940 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.182.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 23:21:05.944904   14940 out.go:177] * Verifying Kubernetes components...
	I1226 23:21:05.960123   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:21:06.074331   14940 command_runner.go:130] > apiVersion: v1
	I1226 23:21:06.074331   14940 command_runner.go:130] > data:
	I1226 23:21:06.074613   14940 command_runner.go:130] >   Corefile: |
	I1226 23:21:06.074613   14940 command_runner.go:130] >     .:53 {
	I1226 23:21:06.074613   14940 command_runner.go:130] >         log
	I1226 23:21:06.074613   14940 command_runner.go:130] >         errors
	I1226 23:21:06.074613   14940 command_runner.go:130] >         health {
	I1226 23:21:06.074613   14940 command_runner.go:130] >            lameduck 5s
	I1226 23:21:06.074613   14940 command_runner.go:130] >         }
	I1226 23:21:06.074613   14940 command_runner.go:130] >         ready
	I1226 23:21:06.074613   14940 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1226 23:21:06.074698   14940 command_runner.go:130] >            pods insecure
	I1226 23:21:06.074698   14940 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1226 23:21:06.074698   14940 command_runner.go:130] >            ttl 30
	I1226 23:21:06.074698   14940 command_runner.go:130] >         }
	I1226 23:21:06.074698   14940 command_runner.go:130] >         prometheus :9153
	I1226 23:21:06.074698   14940 command_runner.go:130] >         hosts {
	I1226 23:21:06.074698   14940 command_runner.go:130] >            172.21.176.1 host.minikube.internal
	I1226 23:21:06.074698   14940 command_runner.go:130] >            fallthrough
	I1226 23:21:06.074767   14940 command_runner.go:130] >         }
	I1226 23:21:06.074767   14940 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1226 23:21:06.074767   14940 command_runner.go:130] >            max_concurrent 1000
	I1226 23:21:06.074767   14940 command_runner.go:130] >         }
	I1226 23:21:06.074767   14940 command_runner.go:130] >         cache 30
	I1226 23:21:06.074767   14940 command_runner.go:130] >         loop
	I1226 23:21:06.074767   14940 command_runner.go:130] >         reload
	I1226 23:21:06.074767   14940 command_runner.go:130] >         loadbalance
	I1226 23:21:06.074767   14940 command_runner.go:130] >     }
	I1226 23:21:06.074767   14940 command_runner.go:130] > kind: ConfigMap
	I1226 23:21:06.074767   14940 command_runner.go:130] > metadata:
	I1226 23:21:06.074767   14940 command_runner.go:130] >   creationTimestamp: "2023-12-26T22:58:16Z"
	I1226 23:21:06.074767   14940 command_runner.go:130] >   name: coredns
	I1226 23:21:06.074767   14940 command_runner.go:130] >   namespace: kube-system
	I1226 23:21:06.074767   14940 command_runner.go:130] >   resourceVersion: "401"
	I1226 23:21:06.074767   14940 command_runner.go:130] >   uid: d1f0a471-f150-4768-9d56-de6f75812b72
	I1226 23:21:06.078528   14940 node_ready.go:35] waiting up to 6m0s for node "multinode-455300" to be "Ready" ...
	I1226 23:21:06.078679   14940 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1226 23:21:06.086559   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:06.086559   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:06.086634   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:06.086634   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:06.091355   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:06.091529   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:06.091529   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:06.091529   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:06.091529   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:06 GMT
	I1226 23:21:06.091593   14940 round_trippers.go:580]     Audit-Id: 4e7b6543-be36-41f1-84b2-336f6eaa0c5e
	I1226 23:21:06.091593   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:06.091593   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:06.091951   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:06.583599   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:06.583599   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:06.583599   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:06.583599   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:06.589308   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:06.589308   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:06.589308   14940 round_trippers.go:580]     Audit-Id: 29b1d0ed-a056-451a-8910-bc172f7cd031
	I1226 23:21:06.589754   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:06.589754   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:06.589754   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:06.589813   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:06.589813   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:06 GMT
	I1226 23:21:06.590225   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:07.089347   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:07.089347   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:07.089347   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:07.089347   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:07.093984   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:07.093984   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Audit-Id: 322bc397-dc9b-4a17-81da-ab8b96a424f4
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:07.093984   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:07.093984   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:07 GMT
	I1226 23:21:07.095535   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:07.586319   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:07.586319   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:07.586319   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:07.586319   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:07.591318   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:07.592494   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:07.592494   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:07.592494   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:07 GMT
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Audit-Id: 5b68ab17-c513-433c-a48e-f95ee97e581d
	I1226 23:21:07.592494   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:08.090415   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:08.090415   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:08.090415   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:08.090415   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:08.095036   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:08.095036   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:08.095036   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:08.095036   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:08.095036   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:08.095036   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:08.095036   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:08 GMT
	I1226 23:21:08.095265   14940 round_trippers.go:580]     Audit-Id: cdd9a33c-2c99-45ad-b0fc-618d34699838
	I1226 23:21:08.095602   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:08.095664   14940 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 23:21:08.593915   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:08.593915   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:08.593915   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:08.593915   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:08.597517   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:08.597517   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Audit-Id: f15ce175-a97a-4089-8a94-ad8f621481c1
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:08.597517   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:08.597517   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:08 GMT
	I1226 23:21:08.597517   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:09.084711   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:09.084711   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:09.084711   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:09.084711   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:09.089317   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:09.089317   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:09.089317   14940 round_trippers.go:580]     Audit-Id: f716392b-3a8f-4385-b8d3-2b83cb0facae
	I1226 23:21:09.089733   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:09.089733   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:09.089733   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:09.089796   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:09.089796   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:09 GMT
	I1226 23:21:09.089796   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:09.588518   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:09.588518   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:09.588596   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:09.588596   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:09.593477   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:09.593477   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Audit-Id: 4ba6c07b-37db-48a3-ae1e-34178f0bfecf
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:09.593599   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:09.593599   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:09 GMT
	I1226 23:21:09.593756   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:10.090170   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:10.090238   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:10.090238   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:10.090367   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:10.094081   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:10.094081   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:10.094081   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:10.094081   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:10 GMT
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Audit-Id: 511fb29e-2052-48ea-b880-2f503b6c62e4
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:10.095306   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:10.095879   14940 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 23:21:10.591707   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:10.591825   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:10.591825   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:10.591825   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:10.596204   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:10.596204   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Audit-Id: 8a53d031-0410-4ecd-b494-142c0cdd03ee
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:10.596204   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:10.596204   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:10 GMT
	I1226 23:21:10.596764   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:11.092170   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:11.092294   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:11.092294   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:11.092294   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:11.099716   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:11.099716   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:11.099716   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:11.099716   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:11 GMT
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Audit-Id: a8ee693b-18e6-4ed6-aa10-81db026f542c
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:11.101361   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:11.593979   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:11.593979   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:11.593979   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:11.593979   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:11.599035   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:11.599098   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:11.599098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:11.599098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:11 GMT
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Audit-Id: 4bdb4fbc-7a79-4e68-81b4-2899c1974fcd
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:11.599098   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:12.079474   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:12.079572   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:12.079572   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:12.079572   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:12.082984   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:12.082984   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:12.083528   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:12.083528   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:12.083528   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:12.083582   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:12 GMT
	I1226 23:21:12.083582   14940 round_trippers.go:580]     Audit-Id: 143ef602-5804-48f2-88bf-27e705422d9a
	I1226 23:21:12.083582   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:12.083582   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:12.582393   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:12.582393   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:12.582393   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:12.582393   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:12.669384   14940 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I1226 23:21:12.669384   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:12.669384   14940 round_trippers.go:580]     Audit-Id: edc82952-7c8e-4222-9877-223c8b2dc5e5
	I1226 23:21:12.669384   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:12.669384   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:12.670109   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:12.670109   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:12.670109   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:12 GMT
	I1226 23:21:12.678397   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1813","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I1226 23:21:12.679231   14940 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 23:21:13.090046   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:13.090046   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.090141   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.090141   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.093490   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:13.093490   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.093490   14940 round_trippers.go:580]     Audit-Id: 99222aea-dbfc-40ec-8a31-fad884b191f8
	I1226 23:21:13.093490   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.094463   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.094463   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.094511   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.094511   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.095254   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:13.095793   14940 node_ready.go:49] node "multinode-455300" has status "Ready":"True"
	I1226 23:21:13.095793   14940 node_ready.go:38] duration metric: took 7.0172664s waiting for node "multinode-455300" to be "Ready" ...
	I1226 23:21:13.095793   14940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:13.095793   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:13.095793   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.095793   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.095793   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.102410   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:13.102410   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Audit-Id: f776e3dc-84a6-4b14-9072-3a4672978898
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.102410   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.103021   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.105564   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1842"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82829 chars]
	I1226 23:21:13.111959   14940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:13.111959   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:13.111959   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.111959   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.111959   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.117181   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:13.117181   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Audit-Id: 140e1818-106a-4c20-9a24-3ada8f4c08da
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.117181   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.117181   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.117181   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:13.118404   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:13.118404   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.118404   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.118404   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.121852   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:13.121852   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.121962   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.121962   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Audit-Id: 15a79763-e89e-4c7f-b4aa-7c227a2ddb98
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.122036   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:13.625884   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:13.625884   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.625884   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.625884   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.629941   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:13.629941   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Audit-Id: 04c076f0-250b-417a-8af6-409b73c2e5d1
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.629941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.629941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.629941   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:13.631502   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:13.631555   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.631591   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.631620   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.634591   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:13.634863   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.634863   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.634863   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Audit-Id: b687506b-0cb6-4a94-bcb2-7522a0bcdf22
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.634863   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:14.113331   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:14.113450   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.113450   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.113450   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.118856   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:14.119359   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.119359   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Audit-Id: bebffbf2-f9b8-478e-a58b-e9afc0d64b83
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.119359   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.119359   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:14.120424   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:14.120424   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.120424   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.120424   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.124581   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:14.124581   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.124581   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.124733   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.124733   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.124733   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.124733   14940 round_trippers.go:580]     Audit-Id: b27154e7-93d4-4696-bf13-c3a6a0ee9af5
	I1226 23:21:14.124733   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.125085   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:14.628006   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:14.628006   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.628006   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.628006   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.634892   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:14.634892   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.634892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.634892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Audit-Id: 19132b50-b5b1-4598-ab83-13bdb6531726
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.635633   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:14.636250   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:14.636365   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.636365   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.636448   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.639858   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:14.639858   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Audit-Id: 4feeec5c-bfde-4880-bbb3-beb504b4e92e
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.639858   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.639858   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.640338   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:15.125755   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:15.125872   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.125872   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.125872   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.130292   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:15.130525   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.130525   14940 round_trippers.go:580]     Audit-Id: f711eb63-786d-41fd-9469-5ba682053a59
	I1226 23:21:15.130525   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.130525   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.130618   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.130618   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.130618   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.131020   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:15.131800   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:15.131800   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.131800   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.131800   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.135175   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:15.135175   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Audit-Id: 7b2a82a1-853d-4acf-8e2e-867a3b8179db
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.135175   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.135175   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.136000   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:15.136448   14940 pod_ready.go:102] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"False"
	I1226 23:21:15.627227   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:15.627312   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.627312   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.627312   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.634357   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:15.634357   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.634357   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.634357   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Audit-Id: 36bef6db-9e7a-4eb7-a698-1f9a76699551
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.634906   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:15.635183   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:15.635183   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.635183   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.635183   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.638833   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:15.638833   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Audit-Id: 16f3834b-7673-4e15-a342-97c762c29630
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.639753   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.639753   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.639826   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:16.125877   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:16.125993   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.125993   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.126081   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.130483   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:16.130483   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.130483   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.130483   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Audit-Id: 8e6e104a-3182-4da3-a91c-74a0d7ffed6a
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.130978   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:16.131954   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:16.132031   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.132031   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.132031   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.135422   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:16.135469   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.135469   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Audit-Id: 61746991-24ad-4d14-aecf-a0072794d2c6
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.135469   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.135568   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:16.626726   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:16.626807   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.626872   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.626872   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.636320   14940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 23:21:16.636320   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Audit-Id: a859e65d-4e18-4d24-a560-6227f0f7f5cd
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.636320   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.636494   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.636926   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:16.637686   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:16.637747   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.637747   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.637747   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.641028   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:16.641028   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Audit-Id: 8e4b7dbe-5955-4026-ad2d-7ed580c5ef9a
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.641991   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.641991   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.642441   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:17.115574   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:17.115704   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.115704   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.115759   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.119166   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:17.119166   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.119166   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.119166   14940 round_trippers.go:580]     Audit-Id: 22c614e7-b5f2-4f06-85bf-43b89b49e89d
	I1226 23:21:17.120095   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.120095   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.120095   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.120095   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.120429   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:17.121120   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:17.121120   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.121120   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.121120   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.129378   14940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 23:21:17.129378   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Audit-Id: b9d461a6-7d3e-4dd2-8824-405200409c9f
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.129378   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.129378   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.129378   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:17.621976   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:17.621976   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.621976   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.621976   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.627660   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:17.627711   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.627711   14940 round_trippers.go:580]     Audit-Id: ed024c82-4969-47e9-98f7-33efbb01712e
	I1226 23:21:17.627711   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.627711   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.627791   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.627791   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.627791   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.628016   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:17.628420   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:17.628420   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.628420   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.628420   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.632005   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:17.632005   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Audit-Id: 66ca5c11-0ce1-411b-9910-9e657f544e40
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.632005   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.632005   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.632005   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:17.632985   14940 pod_ready.go:102] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"False"
	I1226 23:21:18.115446   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:18.115446   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.115446   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.115446   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.118143   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:18.119184   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.119184   14940 round_trippers.go:580]     Audit-Id: 4c2bcdd1-7927-402d-8b4d-cc7f032e456a
	I1226 23:21:18.119184   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.119184   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.119269   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.119269   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.119269   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.119547   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:18.119857   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:18.119857   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.119857   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.119857   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.123546   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:18.123546   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Audit-Id: 4c87edfa-94fa-49cc-95bd-f378ff7f3ac3
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.124384   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.124384   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.124747   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:18.616690   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:18.616690   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.616690   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.616690   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.623729   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:18.623729   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.623729   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.624582   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Audit-Id: 9b6dc645-422d-454b-959b-4b5af5c1510b
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.624717   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:18.625698   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:18.625698   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.625698   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.625698   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.628196   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:18.628196   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Audit-Id: 948e5520-4b13-4c71-991b-125c76d52409
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.628196   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.628196   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.628196   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.121073   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:19.121073   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.121170   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.121170   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.126515   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:19.126872   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.126872   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.126872   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Audit-Id: 3d17edbd-97d5-417f-9adb-b27a4e02f8a2
	I1226 23:21:19.127308   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1226 23:21:19.128098   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.128098   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.128170   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.128170   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.134753   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:19.134846   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.134846   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.134846   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.134846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.134846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.134846   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.134911   14940 round_trippers.go:580]     Audit-Id: 5ff636e5-1579-41de-a3c4-a817fec59187
	I1226 23:21:19.134911   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.135542   14940 pod_ready.go:92] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.135542   14940 pod_ready.go:81] duration metric: took 6.0235835s waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.135542   14940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.135542   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:21:19.135542   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.135542   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.135542   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.138521   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:19.138521   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.138521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.138521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Audit-Id: d996b836-b3cd-4cb5-b240-2c4f3f199630
	I1226 23:21:19.139582   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1834","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I1226 23:21:19.139582   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.139582   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.139582   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.139582   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.143808   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:19.143808   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.144146   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.144146   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Audit-Id: 9fb21395-67bb-46a7-b969-31e6d9bdc713
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.144507   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.144895   14940 pod_ready.go:92] pod "etcd-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.144961   14940 pod_ready.go:81] duration metric: took 9.3534ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.144961   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.144961   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:21:19.144961   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.144961   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.144961   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.147548   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:19.147548   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Audit-Id: 10708729-c7e5-44ee-9b82-d6357960d787
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.147548   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.147548   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.147548   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"bbe5516b-f745-4a20-8df3-3cd3ac15d7f6","resourceVersion":"1836","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.182.57:8443","kubernetes.io/config.hash":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.mirror":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.seen":"2023-12-26T23:20:52.614245928Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I1226 23:21:19.148946   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.148946   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.148946   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.148946   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.152204   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.153176   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.153176   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.153176   14940 round_trippers.go:580]     Audit-Id: 3b603720-1635-435a-a1c5-ecbd31ad5b11
	I1226 23:21:19.153176   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.153235   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.153235   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.153235   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.153467   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.153684   14940 pod_ready.go:92] pod "kube-apiserver-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.153684   14940 pod_ready.go:81] duration metric: took 8.7229ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.153684   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.153684   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:21:19.153684   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.153684   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.153684   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.158367   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:19.158367   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.158367   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.158367   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Audit-Id: af9ebf6c-96b7-4798-acd6-7cbc81a1a34f
	I1226 23:21:19.158367   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"1844","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1226 23:21:19.159878   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.159878   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.159878   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.159878   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.163473   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.163473   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.163473   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.163473   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.163473   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.163473   14940 round_trippers.go:580]     Audit-Id: c1de5a6c-0c5c-426d-a579-793f33575fea
	I1226 23:21:19.163473   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.163613   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.163895   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.164395   14940 pod_ready.go:92] pod "kube-controller-manager-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.164482   14940 pod_ready.go:81] duration metric: took 10.7779ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.164482   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.164544   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:21:19.164641   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.164641   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.164717   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.168470   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.169451   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.169451   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.169521   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.169521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.169521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.169521   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.169521   14940 round_trippers.go:580]     Audit-Id: a075c014-6fe1-489a-b4e7-c4acaaf3ae97
	I1226 23:21:19.169712   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2pfcl","generateName":"kube-proxy-","namespace":"kube-system","uid":"61b5d2fb-802c-4b84-b7fa-7a7e9e024028","resourceVersion":"1631","creationTimestamp":"2023-12-26T23:06:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1226 23:21:19.170585   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:21:19.170585   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.170585   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.170585   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.173913   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.174032   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.174032   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.174032   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.174032   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.174105   14940 round_trippers.go:580]     Audit-Id: 47b46aeb-048c-437f-8b98-514bf85dc611
	I1226 23:21:19.174105   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.174105   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.175196   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m03","uid":"ef364efe-5dc7-4fb4-bc7c-76a3eaa41ba4","resourceVersion":"1649","creationTimestamp":"2023-12-26T23:16:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:16:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I1226 23:21:19.175196   14940 pod_ready.go:92] pod "kube-proxy-2pfcl" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.175733   14940 pod_ready.go:81] duration metric: took 11.2513ms waiting for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.175793   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.325839   14940 request.go:629] Waited for 149.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:19.325959   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:19.325959   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.325959   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.325959   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.329564   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.330605   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.330624   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.330624   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.330624   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.330624   14940 round_trippers.go:580]     Audit-Id: eed0cb80-790c-4792-8b57-2ac8fe578101
	I1226 23:21:19.330706   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.330706   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.330706   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"635","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1226 23:21:19.530123   14940 request.go:629] Waited for 198.2616ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:19.530316   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:19.530316   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.530316   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.530316   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.533764   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.533764   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.533764   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.533764   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Audit-Id: f7d5ba08-54d3-413f-8320-5827ef4a6f89
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.534846   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"1620","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I1226 23:21:19.535462   14940 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.535462   14940 pod_ready.go:81] duration metric: took 359.6695ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.535462   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.733138   14940 request.go:629] Waited for 197.3425ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:19.733223   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:19.733223   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.733406   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.733406   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.738168   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.738168   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Audit-Id: 64d66e5c-98cf-4bf7-9eaa-bf91661f49ea
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.738168   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.738168   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.738544   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"1829","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I1226 23:21:19.922460   14940 request.go:629] Waited for 183.3009ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.922537   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.922537   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.922537   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.922537   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.927013   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:19.927013   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.927013   14940 round_trippers.go:580]     Audit-Id: 86aa1e31-c14a-408d-91cc-17453863c8b0
	I1226 23:21:19.927013   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.927695   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.927695   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.927695   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.927695   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.929418   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.929953   14940 pod_ready.go:92] pod "kube-proxy-hzcqb" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.929953   14940 pod_ready.go:81] duration metric: took 394.4913ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.929953   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:20.126595   14940 request.go:629] Waited for 196.4678ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:20.127042   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:20.127042   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.127042   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.127186   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.131631   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:20.131631   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Audit-Id: a29fb9c6-d461-4e1a-a02c-9e4c35cc878d
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.131631   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.131631   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.132523   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"1839","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1226 23:21:20.331638   14940 request.go:629] Waited for 197.7397ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:20.331780   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:20.332006   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.332006   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.332006   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.336629   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:20.336629   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Audit-Id: 2d4cdfb6-4b90-4ccd-9dac-3a6de5a86383
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.336629   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.336629   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.337450   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:20.337987   14940 pod_ready.go:92] pod "kube-scheduler-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:20.337987   14940 pod_ready.go:81] duration metric: took 408.0337ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:20.337987   14940 pod_ready.go:38] duration metric: took 7.2421951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:20.338126   14940 api_server.go:52] waiting for apiserver process to appear ...
	I1226 23:21:20.352056   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:21:20.375522   14940 command_runner.go:130] > 1851
	I1226 23:21:20.375652   14940 api_server.go:72] duration metric: took 14.4342941s to wait for apiserver process to appear ...
	I1226 23:21:20.375652   14940 api_server.go:88] waiting for apiserver healthz status ...
	I1226 23:21:20.375717   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:20.385153   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 200:
	ok
	I1226 23:21:20.386338   14940 round_trippers.go:463] GET https://172.21.182.57:8443/version
	I1226 23:21:20.386338   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.386338   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.386404   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.388029   14940 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 23:21:20.388029   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.388397   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.388397   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Content-Length: 264
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Audit-Id: 31aa4770-6938-4f49-87d9-530811e84a58
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.388506   14940 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1226 23:21:20.388599   14940 api_server.go:141] control plane version: v1.28.4
	I1226 23:21:20.388599   14940 api_server.go:131] duration metric: took 12.9468ms to wait for apiserver health ...
	I1226 23:21:20.388599   14940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 23:21:20.536020   14940 request.go:629] Waited for 147.1195ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.536020   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.536232   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.536232   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.536286   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.546869   14940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1226 23:21:20.546869   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Audit-Id: 716bc06f-3023-4990-aa24-228f200b4431
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.546869   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.546869   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.549541   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82515 chars]
	I1226 23:21:20.553624   14940 system_pods.go:59] 12 kube-system pods found
	I1226 23:21:20.553698   14940 system_pods.go:61] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "etcd-multinode-455300" [cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kindnet-8jsvj" [376eb267-ce7d-4497-a85e-ff9224a25347] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kindnet-zt55b" [43604859-483f-4e92-a16c-d3f30cb6e4f1] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kube-apiserver-multinode-455300" [bbe5516b-f745-4a20-8df3-3cd3ac15d7f6] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-proxy-2pfcl" [61b5d2fb-802c-4b84-b7fa-7a7e9e024028] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-proxy-bqlf8" [1caff24c-909f-42a9-a4b8-d9c8c1ec8828] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running
	I1226 23:21:20.553774   14940 system_pods.go:74] duration metric: took 165.1747ms to wait for pod list to return data ...
	I1226 23:21:20.553774   14940 default_sa.go:34] waiting for default service account to be created ...
	I1226 23:21:20.737346   14940 request.go:629] Waited for 183.354ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/default/serviceaccounts
	I1226 23:21:20.737346   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/default/serviceaccounts
	I1226 23:21:20.737346   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.737346   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.737346   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.741936   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:20.742650   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.742650   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Content-Length: 262
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Audit-Id: c858c45b-5f76-4d05-98a5-2322b7682e59
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.742727   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.742745   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.742745   14940 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"52815640-9603-4e59-b38b-e19ec6f4b307","resourceVersion":"349","creationTimestamp":"2023-12-26T22:58:29Z"}}]}
	I1226 23:21:20.743161   14940 default_sa.go:45] found service account: "default"
	I1226 23:21:20.743161   14940 default_sa.go:55] duration metric: took 189.3873ms for default service account to be created ...
	I1226 23:21:20.743240   14940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 23:21:20.923564   14940 request.go:629] Waited for 180.2149ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.923564   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.923564   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.923564   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.923564   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.934050   14940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1226 23:21:20.934050   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.934050   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.934050   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.934050   14940 round_trippers.go:580]     Audit-Id: efc9f108-7827-4ab4-b998-a27e99ed68ad
	I1226 23:21:20.934836   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.934836   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.934836   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.937531   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82515 chars]
	I1226 23:21:20.941584   14940 system_pods.go:86] 12 kube-system pods found
	I1226 23:21:20.941675   14940 system_pods.go:89] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "etcd-multinode-455300" [cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kindnet-8jsvj" [376eb267-ce7d-4497-a85e-ff9224a25347] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kindnet-zt55b" [43604859-483f-4e92-a16c-d3f30cb6e4f1] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-apiserver-multinode-455300" [bbe5516b-f745-4a20-8df3-3cd3ac15d7f6] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-proxy-2pfcl" [61b5d2fb-802c-4b84-b7fa-7a7e9e024028] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-proxy-bqlf8" [1caff24c-909f-42a9-a4b8-d9c8c1ec8828] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running
	I1226 23:21:20.941831   14940 system_pods.go:126] duration metric: took 198.5904ms to wait for k8s-apps to be running ...
	I1226 23:21:20.941831   14940 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 23:21:20.954560   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:21:20.976564   14940 system_svc.go:56] duration metric: took 33.7191ms WaitForService to wait for kubelet.
	I1226 23:21:20.976564   14940 kubeadm.go:581] duration metric: took 15.0352054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 23:21:20.976564   14940 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:21:21.127064   14940 request.go:629] Waited for 150.3744ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes
	I1226 23:21:21.127288   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes
	I1226 23:21:21.127376   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:21.127442   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:21.127486   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:21.134080   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:21.134080   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:21 GMT
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Audit-Id: f3485c07-5450-4c04-bd6d-b43e51c0d330
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:21.134080   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:21.134080   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:21.134601   14940 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14730 chars]
	I1226 23:21:21.135549   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:21.135549   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:21.135549   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:21.135549   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:21.135645   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:21.135645   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:21.135645   14940 node_conditions.go:105] duration metric: took 159.0819ms to run NodePressure ...
	I1226 23:21:21.135645   14940 start.go:228] waiting for startup goroutines ...
	I1226 23:21:21.135645   14940 start.go:233] waiting for cluster config update ...
	I1226 23:21:21.135645   14940 start.go:242] writing updated cluster config ...
	I1226 23:21:21.149762   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:21:21.149854   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:21:21.157485   14940 out.go:177] * Starting worker node multinode-455300-m02 in cluster multinode-455300
	I1226 23:21:21.161093   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:21:21.161093   14940 cache.go:56] Caching tarball of preloaded images
	I1226 23:21:21.162151   14940 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 23:21:21.162151   14940 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 23:21:21.162151   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:21:21.164302   14940 start.go:365] acquiring machines lock for multinode-455300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:21:21.164302   14940 start.go:369] acquired machines lock for "multinode-455300-m02" in 0s
	I1226 23:21:21.165477   14940 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:21:21.165477   14940 fix.go:54] fixHost starting: m02
	I1226 23:21:21.166273   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:23.308632   14940 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:21:23.308632   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:23.308632   14940 fix.go:102] recreateIfNeeded on multinode-455300-m02: state=Stopped err=<nil>
	W1226 23:21:23.308632   14940 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:21:23.315224   14940 out.go:177] * Restarting existing hyperv VM for "multinode-455300-m02" ...
	I1226 23:21:23.318110   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300-m02
	I1226 23:21:26.483294   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:26.483294   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:26.483294   14940 main.go:141] libmachine: Waiting for host to start...
	I1226 23:21:26.483294   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:28.842509   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:28.842509   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:28.842509   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:31.464432   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:31.464470   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:32.465963   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:34.710514   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:34.710514   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:34.710626   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:37.308610   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:37.308661   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:38.309334   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:40.568709   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:40.568709   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:40.568709   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:43.159797   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:43.159797   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:44.174687   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:46.465398   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:46.465611   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:46.465611   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:49.091695   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:49.092056   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:50.094960   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:52.330822   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:52.331032   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:52.331115   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:54.992446   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:21:54.992446   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:54.995708   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:57.194393   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:57.194393   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:57.194478   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:59.802903   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:21:59.803109   14940 main.go:141] libmachine: [stderr =====>] : 
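The repeated `Get-VM … ipaddresses[0]` calls above (empty stdout twice, then `172.21.184.151`) are the driver polling until the guest adapter reports an address. A minimal sketch of that retry pattern, with `retry_until_output` and `probe` as hypothetical stand-ins for the real PowerShell query:

```shell
# retry_until_output CMD...: re-run CMD until it prints something, then echo it.
# Stand-in for the driver's loop; the real query is the PowerShell call in the log.
retry_until_output() {
  out=""
  until [ -n "$out" ]; do
    out="$("$@")"             # one attempt's stdout
    [ -n "$out" ] || sleep 1  # back off, as the ~5s gaps between log attempts show
  done
  printf '%s\n' "$out"
}

# Example: a probe that only answers on its second attempt.
count_file=$(mktemp)
probe() {
  n=$(cat "$count_file"); n=$((${n:-0} + 1)); printf '%s' "$n" > "$count_file"
  [ "$n" -ge 2 ] && echo "172.21.184.151"
}
ip=$(retry_until_output probe)
echo "$ip"
```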
	I1226 23:21:59.803244   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:21:59.806570   14940 machine.go:88] provisioning docker machine ...
	I1226 23:21:59.806651   14940 buildroot.go:166] provisioning hostname "multinode-455300-m02"
	I1226 23:21:59.806651   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:02.045823   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:02.045823   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:02.045823   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:04.635745   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:04.635745   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:04.640663   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:04.640663   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:04.640663   14940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300-m02 && echo "multinode-455300-m02" | sudo tee /etc/hostname
	I1226 23:22:04.806907   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300-m02
	
	I1226 23:22:04.806907   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:06.989474   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:06.989474   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:06.989474   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:09.627169   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:09.627169   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:09.632601   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:09.633279   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:09.633279   14940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
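The `/etc/hosts` script just run over SSH is idempotent: it rewrites an existing `127.0.1.1` alias in place and only appends when none exists. The same logic, pointed at a scratch copy (`HOSTS` is a stand-in path) so it is safe to run anywhere:

```shell
# Safe replay of the log's /etc/hosts hostname logic on a scratch file.
HOSTS=./hosts.copy
NAME=multinode-455300-m02
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # replace the existing 127.0.1.1 alias rather than appending a duplicate
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```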
	I1226 23:22:09.787878   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 23:22:09.787878   14940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:22:09.787878   14940 buildroot.go:174] setting up certificates
	I1226 23:22:09.787878   14940 provision.go:83] configureAuth start
	I1226 23:22:09.787878   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:11.990188   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:11.990188   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:11.990293   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:14.596056   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:14.596056   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:14.596056   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:16.788157   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:16.788213   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:16.788213   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:19.383501   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:19.383501   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:19.383763   14940 provision.go:138] copyHostCerts
	I1226 23:22:19.384047   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:22:19.384063   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:22:19.384063   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:22:19.384835   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:22:19.385836   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:22:19.385836   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:22:19.385836   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:22:19.386646   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:22:19.387536   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:22:19.387536   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:22:19.388080   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:22:19.388384   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:22:19.389444   14940 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300-m02 san=[172.21.184.151 172.21.184.151 localhost 127.0.0.1 minikube multinode-455300-m02]
	I1226 23:22:19.537868   14940 provision.go:172] copyRemoteCerts
	I1226 23:22:19.552043   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:22:19.552043   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:21.750842   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:21.750842   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:21.750975   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:24.393903   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:24.394050   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:24.394344   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:22:24.503631   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9515884s)
	I1226 23:22:24.503754   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:22:24.504249   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:22:24.548141   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:22:24.548141   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 23:22:24.588160   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:22:24.588425   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 23:22:24.630228   14940 provision.go:86] duration metric: configureAuth took 14.8422957s
	I1226 23:22:24.630228   14940 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:22:24.630960   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:22:24.631021   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:26.808762   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:26.809060   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:26.809060   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:29.384990   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:29.385166   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:29.391701   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:29.392391   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:29.392391   14940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:22:29.535406   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:22:29.535529   14940 buildroot.go:70] root file system type: tmpfs
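The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns the guest's root filesystem type (here `tmpfs`, typical for the buildroot ISO). Run standalone on an ordinary Linux host it prints that host's type instead:

```shell
# Same root-filesystem probe the provisioner runs over SSH (GNU df --output).
fstype="$(df --output=fstype / | tail -n 1)"
echo "root fstype: $fstype"
```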
	I1226 23:22:29.535812   14940 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:22:29.535933   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:31.722596   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:31.722698   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:31.722945   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:34.323757   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:34.323938   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:34.330303   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:34.330566   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:34.331095   14940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.21.182.57"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:22:34.494547   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.21.182.57
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:22:34.495084   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:36.643040   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:36.643040   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:36.643160   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:39.264478   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:39.264577   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:39.270152   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:39.270926   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:39.270926   14940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:22:40.604407   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:22:40.604407   14940 machine.go:91] provisioned docker machine in 40.7978553s
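The `diff -u old new || { mv …; systemctl … }` command above is an "install only if changed" idiom: `diff` exits non-zero when the old unit is missing or differs (as here, where `diff` reported `No such file or directory`), so the move and service restart run only in that case. A sketch on scratch files, with hypothetical `.copy` paths:

```shell
# Install-only-if-changed idiom from the log, replayed on scratch files.
old=./docker.service.copy
new=./docker.service.copy.new
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$new"
if ! diff -u "$old" "$new" >/dev/null 2>&1; then
  mv "$new" "$old"   # on the VM this is followed by daemon-reload/enable/restart
fi
```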
	I1226 23:22:40.604407   14940 start.go:300] post-start starting for "multinode-455300-m02" (driver="hyperv")
	I1226 23:22:40.604407   14940 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:22:40.617787   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:22:40.617787   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:42.836676   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:42.836771   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:42.836771   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:45.445900   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:45.445900   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:45.446272   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:22:45.556413   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9385937s)
	I1226 23:22:45.571250   14940 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:22:45.575787   14940 command_runner.go:130] > NAME=Buildroot
	I1226 23:22:45.575787   14940 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:22:45.575787   14940 command_runner.go:130] > ID=buildroot
	I1226 23:22:45.575787   14940 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:22:45.575787   14940 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:22:45.576806   14940 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:22:45.576806   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:22:45.576806   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:22:45.578060   14940 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:22:45.578060   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:22:45.592640   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:22:45.611194   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:22:45.650556   14940 start.go:303] post-start completed in 5.0461547s
	I1226 23:22:45.650620   14940 fix.go:56] fixHost completed within 1m24.4851763s
	I1226 23:22:45.650682   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:47.847006   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:47.847220   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:47.847220   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:50.417927   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:50.418156   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:50.424338   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:50.425096   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:50.425096   14940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 23:22:50.565661   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703632970.562811137
	
	I1226 23:22:50.565716   14940 fix.go:206] guest clock: 1703632970.562811137
	I1226 23:22:50.565716   14940 fix.go:219] Guest: 2023-12-26 23:22:50.562811137 +0000 UTC Remote: 2023-12-26 23:22:45.6506208 +0000 UTC m=+232.474476101 (delta=4.912190337s)
	I1226 23:22:50.565716   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:52.762944   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:52.762944   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:52.763068   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:55.363815   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:55.363815   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:55.369424   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:55.370749   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:55.370749   14940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703632970
	I1226 23:22:55.522425   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:22:50 UTC 2023
	
	I1226 23:22:55.522498   14940 fix.go:226] clock set: Tue Dec 26 23:22:50 UTC 2023
	 (err=<nil>)
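The clock-fix step above reads the guest clock with `date +%s.%N` (1703632970.56…), computes the ~4.9s delta against the host, and pins the VM with `sudo date -s @1703632970`. The arithmetic alone, using the epoch from the log, looks like this:

```shell
# Clock-delta arithmetic from the log's fix.go step (guest epoch is from the log).
guest_epoch=1703632970                 # guest clock reading, 2023-12-26 23:22:50 UTC
host_epoch=$(date +%s)                 # current clock on the machine running this
delta=$((host_epoch - guest_epoch))
echo "clock is ${delta}s ahead of the logged guest time"
```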
	I1226 23:22:55.522498   14940 start.go:83] releasing machines lock for "multinode-455300-m02", held for 1m34.358231s
	I1226 23:22:55.522770   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:57.689168   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:57.689168   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:57.689168   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:00.297858   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:23:00.297858   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:00.302408   14940 out.go:177] * Found network options:
	I1226 23:23:00.306495   14940 out.go:177]   - NO_PROXY=172.21.182.57
	W1226 23:23:00.308552   14940 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 23:23:00.311816   14940 out.go:177]   - NO_PROXY=172.21.182.57
	W1226 23:23:00.314257   14940 proxy.go:119] fail to check proxy env: Error ip not in block
	W1226 23:23:00.316001   14940 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 23:23:00.318516   14940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 23:23:00.319048   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:23:00.330632   14940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 23:23:00.330632   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:23:02.554383   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:02.554574   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:02.554574   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:02.585119   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:02.585119   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:02.585119   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:05.204799   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:23:05.204799   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:05.204799   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:23:05.225400   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:23:05.225400   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:05.225400   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:23:05.312294   14940 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1226 23:23:05.312900   14940 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9821031s)
	W1226 23:23:05.312900   14940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 23:23:05.326954   14940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 23:23:05.395243   14940 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 23:23:05.395243   14940 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0761964s)
	I1226 23:23:05.396211   14940 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1226 23:23:05.396211   14940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 23:23:05.396339   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:23:05.396554   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:23:05.429601   14940 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1226 23:23:05.443456   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 23:23:05.476860   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 23:23:05.492898   14940 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 23:23:05.504989   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 23:23:05.533288   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:23:05.563788   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 23:23:05.593129   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:23:05.622133   14940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 23:23:05.653707   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1226 23:23:05.684451   14940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 23:23:05.701321   14940 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 23:23:05.715176   14940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 23:23:05.746832   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:05.929680   14940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 23:23:05.959022   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:23:05.973118   14940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 23:23:05.993102   14940 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1226 23:23:05.994163   14940 command_runner.go:130] > [Unit]
	I1226 23:23:05.994163   14940 command_runner.go:130] > Description=Docker Application Container Engine
	I1226 23:23:05.994163   14940 command_runner.go:130] > Documentation=https://docs.docker.com
	I1226 23:23:05.994163   14940 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1226 23:23:05.994163   14940 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1226 23:23:05.994163   14940 command_runner.go:130] > StartLimitBurst=3
	I1226 23:23:05.994163   14940 command_runner.go:130] > StartLimitIntervalSec=60
	I1226 23:23:05.994163   14940 command_runner.go:130] > [Service]
	I1226 23:23:05.994163   14940 command_runner.go:130] > Type=notify
	I1226 23:23:05.994163   14940 command_runner.go:130] > Restart=on-failure
	I1226 23:23:05.994163   14940 command_runner.go:130] > Environment=NO_PROXY=172.21.182.57
	I1226 23:23:05.994163   14940 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1226 23:23:05.994163   14940 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1226 23:23:05.994163   14940 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1226 23:23:05.994163   14940 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1226 23:23:05.994163   14940 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1226 23:23:05.994163   14940 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1226 23:23:05.994163   14940 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1226 23:23:05.994163   14940 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1226 23:23:05.994163   14940 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1226 23:23:05.994163   14940 command_runner.go:130] > ExecStart=
	I1226 23:23:05.994163   14940 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1226 23:23:05.994163   14940 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1226 23:23:05.994163   14940 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1226 23:23:05.994163   14940 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1226 23:23:05.994163   14940 command_runner.go:130] > LimitNOFILE=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > LimitNPROC=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > LimitCORE=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1226 23:23:05.994163   14940 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1226 23:23:05.994163   14940 command_runner.go:130] > TasksMax=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > TimeoutStartSec=0
	I1226 23:23:05.994163   14940 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1226 23:23:05.994163   14940 command_runner.go:130] > Delegate=yes
	I1226 23:23:05.994163   14940 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1226 23:23:05.994163   14940 command_runner.go:130] > KillMode=process
	I1226 23:23:05.994163   14940 command_runner.go:130] > [Install]
	I1226 23:23:05.994163   14940 command_runner.go:130] > WantedBy=multi-user.target
	I1226 23:23:06.008099   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:23:06.040097   14940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 23:23:06.079091   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:23:06.116040   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:23:06.157147   14940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 23:23:06.220663   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:23:06.242821   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:23:06.273078   14940 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1226 23:23:06.286747   14940 ssh_runner.go:195] Run: which cri-dockerd
	I1226 23:23:06.292803   14940 command_runner.go:130] > /usr/bin/cri-dockerd
	I1226 23:23:06.307049   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 23:23:06.325093   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 23:23:06.370382   14940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 23:23:06.551335   14940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 23:23:06.711334   14940 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 23:23:06.711334   14940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 23:23:06.755460   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:06.926200   14940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 23:23:08.594083   14940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6678828s)
	I1226 23:23:08.606539   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:23:08.789966   14940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 23:23:08.976889   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:23:09.162811   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:09.345730   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 23:23:09.394212   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:09.574500   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 23:23:09.689865   14940 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 23:23:09.702162   14940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 23:23:09.709171   14940 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1226 23:23:09.709171   14940 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 23:23:09.709171   14940 command_runner.go:130] > Device: 16h/22d	Inode: 889         Links: 1
	I1226 23:23:09.709171   14940 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1226 23:23:09.710026   14940 command_runner.go:130] > Access: 2023-12-26 23:23:09.595575506 +0000
	I1226 23:23:09.710026   14940 command_runner.go:130] > Modify: 2023-12-26 23:23:09.595575506 +0000
	I1226 23:23:09.710026   14940 command_runner.go:130] > Change: 2023-12-26 23:23:09.599575506 +0000
	I1226 23:23:09.710026   14940 command_runner.go:130] >  Birth: -
	I1226 23:23:09.710238   14940 start.go:543] Will wait 60s for crictl version
	I1226 23:23:09.724381   14940 ssh_runner.go:195] Run: which crictl
	I1226 23:23:09.728364   14940 command_runner.go:130] > /usr/bin/crictl
	I1226 23:23:09.742585   14940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 23:23:09.819874   14940 command_runner.go:130] > Version:  0.1.0
	I1226 23:23:09.819874   14940 command_runner.go:130] > RuntimeName:  docker
	I1226 23:23:09.819874   14940 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1226 23:23:09.819974   14940 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 23:23:09.819974   14940 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 23:23:09.830702   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:23:09.867621   14940 command_runner.go:130] > 24.0.7
	I1226 23:23:09.876623   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:23:09.912632   14940 command_runner.go:130] > 24.0.7
	I1226 23:23:09.917182   14940 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 23:23:09.919708   14940 out.go:177]   - env NO_PROXY=172.21.182.57
	I1226 23:23:09.922104   14940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 23:23:09.929289   14940 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 23:23:09.929289   14940 ip.go:210] interface addr: 172.21.176.1/20
	I1226 23:23:09.940981   14940 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 23:23:09.947023   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 23:23:09.972056   14940 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300 for IP: 172.21.184.151
	I1226 23:23:09.972178   14940 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:23:09.972968   14940 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 23:23:09.972968   14940 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 23:23:09.973501   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 23:23:09.973922   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1226 23:23:09.974189   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 23:23:09.974486   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 23:23:09.975562   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1226 23:23:09.976003   14940 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1226 23:23:09.976232   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 23:23:09.976755   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 23:23:09.977320   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 23:23:09.977865   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 23:23:09.978956   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1226 23:23:09.979265   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem -> /usr/share/ca-certificates/10728.pem
	I1226 23:23:09.979568   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /usr/share/ca-certificates/107282.pem
	I1226 23:23:09.979889   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:09.980854   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 23:23:10.021837   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 23:23:10.063353   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 23:23:10.106459   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 23:23:10.147242   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1226 23:23:10.190717   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1226 23:23:10.233539   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 23:23:10.290367   14940 ssh_runner.go:195] Run: openssl version
	I1226 23:23:10.299035   14940 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1226 23:23:10.311927   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1226 23:23:10.353069   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.360111   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.360311   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.374253   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.383405   14940 command_runner.go:130] > 3ec20f2e
	I1226 23:23:10.397586   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 23:23:10.432447   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 23:23:10.465823   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.472767   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.472967   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.485697   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.494265   14940 command_runner.go:130] > b5213941
	I1226 23:23:10.507730   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 23:23:10.540006   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1226 23:23:10.569799   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.576665   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.576878   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.591393   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.600999   14940 command_runner.go:130] > 51391683
	I1226 23:23:10.614529   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1226 23:23:10.644520   14940 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 23:23:10.651523   14940 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 23:23:10.652112   14940 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 23:23:10.662496   14940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 23:23:10.700781   14940 command_runner.go:130] > cgroupfs
	I1226 23:23:10.701388   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:23:10.701452   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:23:10.701452   14940 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 23:23:10.701518   14940 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.184.151 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-455300 NodeName:multinode-455300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.182.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.184.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 23:23:10.701824   14940 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.184.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-455300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.21.184.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 23:23:10.701962   14940 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-455300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.184.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 23:23:10.715588   14940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 23:23:10.734162   14940 command_runner.go:130] > kubeadm
	I1226 23:23:10.734162   14940 command_runner.go:130] > kubectl
	I1226 23:23:10.734162   14940 command_runner.go:130] > kubelet
	I1226 23:23:10.734162   14940 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 23:23:10.746188   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1226 23:23:10.762839   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I1226 23:23:10.791594   14940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 23:23:10.834342   14940 ssh_runner.go:195] Run: grep 172.21.182.57	control-plane.minikube.internal$ /etc/hosts
	I1226 23:23:10.840293   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.182.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 23:23:10.858813   14940 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:23:10.858951   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:23:10.858951   14940 start.go:304] JoinCluster: &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.182.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:23:10.859566   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1226 23:23:10.859653   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:23:13.050673   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:13.050673   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:13.050792   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:15.670171   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:23:15.670171   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:15.670505   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:23:15.891033   14940 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e29sv0.49niog2zfjqw7ep9 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 
	I1226 23:23:15.891137   14940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.031515s)
	I1226 23:23:15.891137   14940 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:15.891251   14940 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:23:15.906115   14940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-455300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1226 23:23:15.906115   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:23:18.083433   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:18.083602   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:18.083602   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:20.693709   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:23:20.693709   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:20.694038   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:23:20.879991   14940 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1226 23:23:20.977635   14940 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zt55b, kube-system/kube-proxy-bqlf8
	I1226 23:23:23.015222   14940 command_runner.go:130] > node/multinode-455300-m02 cordoned
	I1226 23:23:23.015222   14940 command_runner.go:130] > pod "busybox-5bc68d56bd-bskhd" has DeletionTimestamp older than 1 seconds, skipping
	I1226 23:23:23.015222   14940 command_runner.go:130] > node/multinode-455300-m02 drained
	I1226 23:23:23.015222   14940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-455300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (7.1091083s)
	I1226 23:23:23.015345   14940 node.go:108] successfully drained node "m02"
	I1226 23:23:23.016613   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:23:23.017563   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:23:23.018649   14940 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1226 23:23:23.018950   14940 round_trippers.go:463] DELETE https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:23.018950   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:23.019013   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:23.019013   14940 round_trippers.go:473]     Content-Type: application/json
	I1226 23:23:23.019013   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:23.044652   14940 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1226 23:23:23.044652   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:23.044652   14940 round_trippers.go:580]     Audit-Id: 0dc0e723-0792-4aaf-90d1-86b99175594b
	I1226 23:23:23.044652   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:23.044652   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:23.044652   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:23.044652   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:23.044946   14940 round_trippers.go:580]     Content-Length: 171
	I1226 23:23:23.044946   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:23 GMT
	I1226 23:23:23.045025   14940 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-455300-m02","kind":"nodes","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d"}}
	I1226 23:23:23.045086   14940 node.go:124] successfully deleted node "m02"
	I1226 23:23:23.045162   14940 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:23.045231   14940 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:23.045231   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e29sv0.49niog2zfjqw7ep9 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-455300-m02"
	I1226 23:23:23.314053   14940 command_runner.go:130] ! W1226 23:23:23.312501    1365 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1226 23:23:23.935875   14940 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 23:23:25.778060   14940 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 23:23:25.778060   14940 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1226 23:23:25.778060   14940 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1226 23:23:25.778060   14940 command_runner.go:130] > This node has joined the cluster:
	I1226 23:23:25.778060   14940 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1226 23:23:25.778060   14940 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1226 23:23:25.778060   14940 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1226 23:23:25.778060   14940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e29sv0.49niog2zfjqw7ep9 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-455300-m02": (2.7327815s)
	I1226 23:23:25.778060   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1226 23:23:26.066067   14940 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1226 23:23:26.299693   14940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-455300 minikube.k8s.io/updated_at=2023_12_26T23_23_26_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 23:23:26.474138   14940 command_runner.go:130] > node/multinode-455300-m02 labeled
	I1226 23:23:26.474221   14940 command_runner.go:130] > node/multinode-455300-m03 labeled
	I1226 23:23:26.474289   14940 start.go:306] JoinCluster complete in 15.6153408s
	I1226 23:23:26.474289   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:23:26.474289   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:23:26.487930   14940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 23:23:26.497065   14940 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 23:23:26.497255   14940 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1226 23:23:26.497255   14940 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1226 23:23:26.497255   14940 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 23:23:26.497255   14940 command_runner.go:130] > Access: 2023-12-26 23:19:30.718927400 +0000
	I1226 23:23:26.497368   14940 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1226 23:23:26.497368   14940 command_runner.go:130] > Change: 2023-12-26 23:19:18.490000000 +0000
	I1226 23:23:26.497368   14940 command_runner.go:130] >  Birth: -
	I1226 23:23:26.497512   14940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 23:23:26.497512   14940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 23:23:26.559138   14940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 23:23:27.095450   14940 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:23:27.095553   14940 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:23:27.095553   14940 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 23:23:27.095553   14940 command_runner.go:130] > daemonset.apps/kindnet configured
	I1226 23:23:27.097264   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:23:27.098532   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:23:27.100098   14940 round_trippers.go:463] GET https://172.21.182.57:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 23:23:27.100098   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:27.100098   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:27.100098   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:27.108700   14940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 23:23:27.109076   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:27.109076   14940 round_trippers.go:580]     Audit-Id: 0915df5d-6a11-459c-aa74-f686939ee533
	I1226 23:23:27.109076   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:27.109137   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:27.109137   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:27.109137   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:27.109137   14940 round_trippers.go:580]     Content-Length: 292
	I1226 23:23:27.109202   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:27 GMT
	I1226 23:23:27.109202   14940 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"1867","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 23:23:27.109421   14940 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-455300" context rescaled to 1 replicas
	I1226 23:23:27.109421   14940 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:27.111996   14940 out.go:177] * Verifying Kubernetes components...
	I1226 23:23:27.127989   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:23:27.152060   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:23:27.152881   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:23:27.153771   14940 node_ready.go:35] waiting up to 6m0s for node "multinode-455300-m02" to be "Ready" ...
	I1226 23:23:27.153771   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:27.153771   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:27.153771   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:27.153771   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:27.165360   14940 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1226 23:23:27.165360   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:27.165360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:27.165360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:27 GMT
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Audit-Id: f91baa4d-72f2-40df-89d4-56a0cf0559a8
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:27.165360   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2019","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3559 chars]
	I1226 23:23:27.668456   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:27.668456   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:27.668544   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:27.668544   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:27.673011   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:27.673011   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:27.673011   14940 round_trippers.go:580]     Audit-Id: 75f1d71d-cf6e-4a18-b949-03552444d86f
	I1226 23:23:27.673011   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:27.673011   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:27.673011   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:27.673011   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:27.673134   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:27 GMT
	I1226 23:23:27.673300   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2019","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3559 chars]
	I1226 23:23:28.173271   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:28.173271   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:28.173271   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:28.173271   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:28.177990   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:28.177990   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:28.178230   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:28.178230   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:28.178230   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:28 GMT
	I1226 23:23:28.178230   14940 round_trippers.go:580]     Audit-Id: 7138b214-5c64-4341-b442-37c1cb5605e0
	I1226 23:23:28.178230   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:28.178348   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:28.178541   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:28.657210   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:28.657456   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:28.657456   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:28.657456   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:28.665033   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:23:28.665123   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:28.665123   14940 round_trippers.go:580]     Audit-Id: fe4bade5-06a6-4cd8-8eb5-c7e60d4baaa3
	I1226 23:23:28.665123   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:28.665123   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:28.665123   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:28.665123   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:28.665200   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:28 GMT
	I1226 23:23:28.665200   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:29.160911   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:29.160911   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:29.160911   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:29.161001   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:29.165154   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:29.165154   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:29.165154   14940 round_trippers.go:580]     Audit-Id: a3fcdc92-0671-4046-824c-330b5773d3e3
	I1226 23:23:29.166116   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:29.166116   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:29.166116   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:29.166162   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:29.166162   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:29 GMT
	I1226 23:23:29.166235   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:29.166235   14940 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:23:29.661579   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:29.661579   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:29.661579   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:29.661579   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:29.664982   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:29.665941   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:29.665941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:29.665941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:29 GMT
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Audit-Id: 930bf22f-5776-43cf-ae07-87ce837252e6
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:29.666092   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:30.162411   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:30.162541   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:30.162541   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:30.162541   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:30.166914   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:30.166914   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:30.166914   14940 round_trippers.go:580]     Audit-Id: ac7524ba-66fb-428c-bac0-7f399fb0bd82
	I1226 23:23:30.166914   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:30.166914   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:30.167702   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:30.167702   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:30.167702   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:30 GMT
	I1226 23:23:30.167895   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:30.664551   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:30.664634   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:30.664743   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:30.664743   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:30.668620   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:30.668932   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:30 GMT
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Audit-Id: 5e86745f-d4b4-42f8-800a-571d6080df73
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:30.668932   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:30.668932   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:30.669218   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:31.156986   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:31.157044   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:31.157044   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:31.157044   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:31.160638   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:31.160638   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:31.161244   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:31.161244   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:31 GMT
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Audit-Id: 9a837fa0-f630-4b6a-b359-37cd3b16a4ac
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:31.161520   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:31.660028   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:31.660103   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:31.660103   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:31.660103   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:31.665855   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:31.665977   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:31.665977   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:31 GMT
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Audit-Id: 1a03cd83-1d04-4ec4-823b-a93045150fd7
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:31.666131   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:31.666178   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:31.667062   14940 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:23:32.164347   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:32.164347   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:32.164347   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:32.164347   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:32.168871   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:32.168871   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:32.168871   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:32.168871   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:32.169147   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:32.169147   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:32 GMT
	I1226 23:23:32.169147   14940 round_trippers.go:580]     Audit-Id: eb514e48-2845-4f1a-b887-400efdb9e1de
	I1226 23:23:32.169147   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:32.169417   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:32.667712   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:32.667712   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:32.667712   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:32.667712   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:32.672308   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:32.672308   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:32.673084   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:32.673195   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:32.673271   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:32 GMT
	I1226 23:23:32.673414   14940 round_trippers.go:580]     Audit-Id: 20f9fbbb-61ca-453e-bbe1-7471493a3232
	I1226 23:23:32.673414   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:32.673414   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:32.673414   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:33.168407   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:33.168407   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:33.168513   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:33.168513   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:33.172832   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:33.172832   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:33.172832   14940 round_trippers.go:580]     Audit-Id: 1b94f620-7106-4a74-82aa-2ee0481c416a
	I1226 23:23:33.172832   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:33.172832   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:33.172832   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:33.173772   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:33.173772   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:33 GMT
	I1226 23:23:33.173895   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:33.668483   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:33.668483   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:33.668483   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:33.668483   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:33.672100   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:33.672100   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:33.672100   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:33.673061   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:33.673061   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:33.673061   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:33 GMT
	I1226 23:23:33.673061   14940 round_trippers.go:580]     Audit-Id: e2390716-05e8-4266-acec-533770fce369
	I1226 23:23:33.673061   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:33.673135   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:33.673770   14940 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:23:34.167453   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:34.167453   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:34.167534   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:34.167534   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:34.173793   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:23:34.173793   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:34 GMT
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Audit-Id: 4f324aee-825b-4cb6-bc59-cc4d9df052c8
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:34.173793   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:34.173793   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:34.173793   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:34.667996   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:34.667996   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:34.667996   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:34.667996   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:34.673019   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:34.673019   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:34 GMT
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Audit-Id: ca8ae0ed-e4c6-4b9c-a5bf-a90764826254
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:34.673173   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:34.673173   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:34.673437   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:35.157498   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:35.157624   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.157624   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.157624   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.162032   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:35.162098   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.162098   14940 round_trippers.go:580]     Audit-Id: a10b0910-0d46-4891-81d0-b0169f1b015a
	I1226 23:23:35.162098   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.162098   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.162098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.162098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.162173   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.162396   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2045","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I1226 23:23:35.163054   14940 node_ready.go:49] node "multinode-455300-m02" has status "Ready":"True"
	I1226 23:23:35.163127   14940 node_ready.go:38] duration metric: took 8.0092841s waiting for node "multinode-455300-m02" to be "Ready" ...
	I1226 23:23:35.163127   14940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:23:35.163238   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:23:35.163407   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.163407   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.163407   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.169027   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:35.169799   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Audit-Id: 895f048d-d416-4dcc-bb96-e88162832909
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.171354   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.171354   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.173318   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2047"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83385 chars]
	I1226 23:23:35.177313   14940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.178195   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:23:35.178195   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.178195   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.178195   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.183300   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:35.184176   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.184254   14940 round_trippers.go:580]     Audit-Id: 188c90c0-665f-4196-9be5-e35d15a33c2d
	I1226 23:23:35.184254   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.184254   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.184285   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.184285   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.184285   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.184495   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1226 23:23:35.184714   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.184714   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.184714   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.184714   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.189327   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:35.189327   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.189327   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.189327   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.189327   14940 round_trippers.go:580]     Audit-Id: c517bdfc-5a8e-4f03-996f-5284055b2f3f
	I1226 23:23:35.189327   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.190202   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.190202   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.190662   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.191108   14940 pod_ready.go:92] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.191108   14940 pod_ready.go:81] duration metric: took 13.7957ms waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.191108   14940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.191108   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:23:35.191108   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.191108   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.191108   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.194708   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.194708   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.194708   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.194920   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.194920   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.194920   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.194920   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.194920   14940 round_trippers.go:580]     Audit-Id: 16f13b84-9f66-4136-b3d9-7377653aeeff
	I1226 23:23:35.195150   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1834","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I1226 23:23:35.195222   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.195222   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.195222   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.195222   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.197833   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:23:35.197833   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.197833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.197833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.197833   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.198777   14940 round_trippers.go:580]     Audit-Id: 6633b134-105e-4a9a-9f93-724b2b514eb9
	I1226 23:23:35.198777   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.198777   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.199024   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.199024   14940 pod_ready.go:92] pod "etcd-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.199024   14940 pod_ready.go:81] duration metric: took 7.9152ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.199024   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.199602   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:23:35.199602   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.199602   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.199602   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.202610   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.202610   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.202610   14940 round_trippers.go:580]     Audit-Id: f9cbf07f-5190-4ed0-8d2c-9d73606970af
	I1226 23:23:35.202610   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.203675   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.203675   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.203675   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.203675   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.204614   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"bbe5516b-f745-4a20-8df3-3cd3ac15d7f6","resourceVersion":"1836","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.182.57:8443","kubernetes.io/config.hash":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.mirror":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.seen":"2023-12-26T23:20:52.614245928Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I1226 23:23:35.204614   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.204614   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.204614   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.204614   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.208605   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.208815   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.208815   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.208815   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Audit-Id: 0f0fc1a0-8d01-41d1-8334-265b1098df77
	I1226 23:23:35.208887   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.208887   14940 pod_ready.go:92] pod "kube-apiserver-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.208887   14940 pod_ready.go:81] duration metric: took 9.863ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.208887   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.209449   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:23:35.209449   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.209449   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.209449   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.212524   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.212892   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.212892   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.212892   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.212892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.212892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.212892   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.212983   14940 round_trippers.go:580]     Audit-Id: 71485d2a-cb4b-4806-9cc6-2e72e1471ca9
	I1226 23:23:35.213300   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"1844","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1226 23:23:35.213805   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.213805   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.213805   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.213805   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.217436   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.217525   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.217525   14940 round_trippers.go:580]     Audit-Id: 459e9c88-a1b0-488b-a825-652a06a7c1ac
	I1226 23:23:35.217525   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.217525   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.217525   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.217583   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.217583   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.217771   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.218238   14940 pod_ready.go:92] pod "kube-controller-manager-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.218298   14940 pod_ready.go:81] duration metric: took 9.4115ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.218353   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.359187   14940 request.go:629] Waited for 140.7457ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:23:35.359505   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:23:35.359505   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.359505   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.359505   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.364854   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:35.364937   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.364937   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Audit-Id: 0a39eb0c-2aca-4849-bc6e-ead8d68962f8
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.364937   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.365477   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2pfcl","generateName":"kube-proxy-","namespace":"kube-system","uid":"61b5d2fb-802c-4b84-b7fa-7a7e9e024028","resourceVersion":"1897","creationTimestamp":"2023-12-26T23:06:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5972 chars]
	I1226 23:23:35.562262   14940 request.go:629] Waited for 195.6718ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:23:35.562581   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:23:35.562581   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.562581   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.562581   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.565784   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.565784   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.565784   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.565784   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Audit-Id: a2168479-0369-4f76-ad17-da8dc1ea5a38
	I1226 23:23:35.566813   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m03","uid":"ef364efe-5dc7-4fb4-bc7c-76a3eaa41ba4","resourceVersion":"2020","creationTimestamp":"2023-12-26T23:16:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:16:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4392 chars]
	I1226 23:23:35.567349   14940 pod_ready.go:97] node "multinode-455300-m03" hosting pod "kube-proxy-2pfcl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300-m03" has status "Ready":"Unknown"
	I1226 23:23:35.567447   14940 pod_ready.go:81] duration metric: took 349.0941ms waiting for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	E1226 23:23:35.567490   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300-m03" hosting pod "kube-proxy-2pfcl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300-m03" has status "Ready":"Unknown"
	I1226 23:23:35.567490   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.766959   14940 request.go:629] Waited for 199.3508ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:23:35.767408   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:23:35.767408   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.767408   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.767408   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.775150   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:23:35.775150   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.775150   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.775289   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Audit-Id: e5ea6ac2-3d81-4a66-a42c-c4775bf6e8ea
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.775632   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"2030","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I1226 23:23:35.967005   14940 request.go:629] Waited for 190.7404ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:35.967345   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:35.967345   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.967420   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.967420   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.971089   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.971846   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.971846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Audit-Id: 1871250c-0b61-462a-8acf-97cd12a37cb0
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.971846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.972323   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2045","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I1226 23:23:35.972793   14940 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.972921   14940 pod_ready.go:81] duration metric: took 405.4316ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.972921   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.162478   14940 request.go:629] Waited for 189.164ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:23:36.162729   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:23:36.162729   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.162729   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.162729   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.169803   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:23:36.169803   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.169803   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.169803   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Audit-Id: 46e4cf6b-a084-4670-aa27-ffe2fecaa858
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.169803   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"1829","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I1226 23:23:36.370126   14940 request.go:629] Waited for 199.1839ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.370126   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.370126   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.370126   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.370126   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.374528   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:36.375521   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.375521   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.375521   14940 round_trippers.go:580]     Audit-Id: 413120ef-0209-47d1-aa3d-b0b82aa3ea57
	I1226 23:23:36.375604   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.375604   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.375660   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.375660   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.375660   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:36.376279   14940 pod_ready.go:92] pod "kube-proxy-hzcqb" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:36.376279   14940 pod_ready.go:81] duration metric: took 403.3584ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.376279   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.571563   14940 request.go:629] Waited for 195.2841ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:23:36.571763   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:23:36.571861   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.571861   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.571861   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.584905   14940 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1226 23:23:36.584905   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Audit-Id: d81ba852-9041-4395-b9af-17dbf875cb21
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.584905   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.584905   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.584905   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"1839","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1226 23:23:36.757415   14940 request.go:629] Waited for 171.2824ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.757475   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.757475   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.757475   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.757475   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.761060   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:36.761060   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Audit-Id: 55ad0ca4-774f-45a7-8226-5c97f23a3511
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.761060   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.761060   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.762319   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:36.762808   14940 pod_ready.go:92] pod "kube-scheduler-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:36.762905   14940 pod_ready.go:81] duration metric: took 386.6254ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.762905   14940 pod_ready.go:38] duration metric: took 1.5997784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:23:36.762973   14940 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 23:23:36.777080   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:23:36.800542   14940 system_svc.go:56] duration metric: took 37.5686ms WaitForService to wait for kubelet.
	I1226 23:23:36.800710   14940 kubeadm.go:581] duration metric: took 9.6912905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 23:23:36.800710   14940 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:23:36.963211   14940 request.go:629] Waited for 162.1156ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes
	I1226 23:23:36.963296   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes
	I1226 23:23:36.963296   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.963296   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.963296   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.967888   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:36.968038   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Audit-Id: 71658490-be01-4f4e-b61e-d65443e2967b
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.968038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.968038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.968373   14940 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2048"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15593 chars]
	I1226 23:23:36.970078   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:23:36.970142   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:23:36.970142   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:23:36.970206   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:23:36.970206   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:23:36.970206   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:23:36.970206   14940 node_conditions.go:105] duration metric: took 169.4959ms to run NodePressure ...
	I1226 23:23:36.970206   14940 start.go:228] waiting for startup goroutines ...
	I1226 23:23:36.970283   14940 start.go:242] writing updated cluster config ...
	I1226 23:23:36.985993   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:23:36.985993   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:23:36.995074   14940 out.go:177] * Starting worker node multinode-455300-m03 in cluster multinode-455300
	I1226 23:23:36.997925   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:23:36.997925   14940 cache.go:56] Caching tarball of preloaded images
	I1226 23:23:36.997925   14940 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 23:23:36.998613   14940 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 23:23:36.998839   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:23:37.001559   14940 start.go:365] acquiring machines lock for multinode-455300-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:23:37.001675   14940 start.go:369] acquired machines lock for "multinode-455300-m03" in 115.3µs
	I1226 23:23:37.001806   14940 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:23:37.001933   14940 fix.go:54] fixHost starting: m03
	I1226 23:23:37.002483   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:39.160105   14940 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:23:39.160105   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:39.160105   14940 fix.go:102] recreateIfNeeded on multinode-455300-m03: state=Stopped err=<nil>
	W1226 23:23:39.160105   14940 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:23:39.163193   14940 out.go:177] * Restarting existing hyperv VM for "multinode-455300-m03" ...
	I1226 23:23:39.166460   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300-m03
	I1226 23:23:41.651376   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:41.651376   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:41.651468   14940 main.go:141] libmachine: Waiting for host to start...
	I1226 23:23:41.651468   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:43.947790   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:43.947951   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:43.947951   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:46.484329   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:46.484516   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:47.486564   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:49.705624   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:49.705882   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:49.705977   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:52.246896   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:52.246896   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:53.247632   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:55.477719   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:55.477857   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:55.478013   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:58.000021   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:58.000094   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:59.002274   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:01.246536   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:01.246915   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:01.246915   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:03.858764   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:24:03.858956   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:04.862506   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:07.118287   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:07.118522   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:07.118522   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:09.790601   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:09.790683   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:09.794401   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:11.935592   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:11.935679   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:11.935734   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:14.568010   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:14.568010   14940 main.go:141] libmachine: [stderr =====>] : 
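The sequence above is minikube's "Waiting for host to start..." loop: it repeatedly queries the VM state and the first network adapter's first IP address via PowerShell, sleeping between attempts, until Hyper-V finally reports `172.21.187.8`. The retry structure can be sketched in plain shell; `poll` here is a stub standing in for the real `(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]` call, and the attempt threshold and IP are illustrative, not minikube's actual logic.

```shell
#!/bin/sh
# Sketch of the "wait for host to start" loop: poll until the hypervisor
# reports an IP for the VM's first adapter. poll is a stub for the real
# PowerShell Get-VM query; it returns nothing for the first three polls.
ATTEMPT=0

poll() {
    ATTEMPT=$((ATTEMPT + 1))
    if [ "$ATTEMPT" -ge 4 ]; then
        IP="172.21.187.8"   # illustrative address, matching the log above
    else
        IP=""               # adapter has no address yet
    fi
}

wait_for_ip() {
    i=0
    while [ "$i" -lt 10 ]; do   # bounded retries; real code also has a timeout
        poll
        if [ -n "$IP" ]; then
            echo "$IP"
            return 0
        fi
        i=$((i + 1))
        # the real loop sleeps ~1s between polls; omitted here
    done
    return 1
}

wait_for_ip
```

Each empty `[stdout =====>]` line in the log corresponds to one poll where the adapter had not yet been assigned an address.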
	I1226 23:24:14.568483   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:24:14.571901   14940 machine.go:88] provisioning docker machine ...
	I1226 23:24:14.572003   14940 buildroot.go:166] provisioning hostname "multinode-455300-m03"
	I1226 23:24:14.572003   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:16.750847   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:16.751079   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:16.751079   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:19.365525   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:19.365525   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:19.372249   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:19.372983   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:19.372983   14940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300-m03 && echo "multinode-455300-m03" | sudo tee /etc/hostname
	I1226 23:24:19.535509   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300-m03
	
	I1226 23:24:19.535509   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:21.763432   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:21.763801   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:21.763941   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:24.393318   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:24.393318   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:24.398934   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:24.400213   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:24.400213   14940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 23:24:24.554385   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
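The SSH command above pins the node's hostname in `/etc/hosts` idempotently: if no line already ends in the hostname, it rewrites an existing `127.0.1.1` entry in place, otherwise appends one. A local sketch of the same logic, run against a temp file instead of the remote `/etc/hosts` (the seed contents are invented for illustration; `sed -i` without a suffix assumes GNU sed):

```shell
#!/bin/sh
# Sketch of minikube's /etc/hosts hostname update, against a temp file.
HOSTS=$(mktemp)
NAME=multinode-455300-m03
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"  # illustrative seed

# Only touch the file if the hostname is not already mapped.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # Replace the existing 127.0.1.1 line (GNU sed in-place edit).
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 line yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running it twice leaves the file unchanged the second time, which is why the remote command is safe to re-run on every provision.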
	I1226 23:24:24.554385   14940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:24:24.554385   14940 buildroot.go:174] setting up certificates
	I1226 23:24:24.554385   14940 provision.go:83] configureAuth start
	I1226 23:24:24.554385   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:26.764966   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:26.764966   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:26.765071   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:29.419669   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:29.420013   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:29.420013   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:31.634509   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:31.634781   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:31.634781   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:34.267201   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:34.267486   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:34.267486   14940 provision.go:138] copyHostCerts
	I1226 23:24:34.267809   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:24:34.268027   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:24:34.268027   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:24:34.268027   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:24:34.269964   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:24:34.270055   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:24:34.270055   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:24:34.270821   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:24:34.272153   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:24:34.272470   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:24:34.272514   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:24:34.272782   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:24:34.273832   14940 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300-m03 san=[172.21.187.8 172.21.187.8 localhost 127.0.0.1 minikube multinode-455300-m03]
	I1226 23:24:34.425530   14940 provision.go:172] copyRemoteCerts
	I1226 23:24:34.440789   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:24:34.440789   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:36.585909   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:36.586160   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:36.586261   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:39.176017   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:39.176017   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:39.176239   14940 sshutil.go:53] new ssh client: &{IP:172.21.187.8 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m03\id_rsa Username:docker}
	I1226 23:24:39.285902   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8451139s)
	I1226 23:24:39.285997   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:24:39.286065   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 23:24:39.326967   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:24:39.327243   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:24:39.370919   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:24:39.370950   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 23:24:39.416979   14940 provision.go:86] duration metric: configureAuth took 14.8625977s
	I1226 23:24:39.416979   14940 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:24:39.417596   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:24:39.417596   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:41.607443   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:41.607443   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:41.607745   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:44.219512   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:44.219701   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:44.225555   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:44.226273   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:44.226273   14940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:24:44.366690   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:24:44.366764   14940 buildroot.go:70] root file system type: tmpfs
	I1226 23:24:44.366963   14940 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:24:44.367053   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:46.563580   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:46.563788   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:46.563905   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:49.204936   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:49.204936   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:49.210613   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:49.212271   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:49.212271   14940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.21.182.57"
	Environment="NO_PROXY=172.21.182.57,172.21.184.151"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
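The unit file staged above leans on two systemd behaviors worth noting: an empty `ExecStart=` line resets any command inherited from the base unit (exactly what the in-file comment describes), and when the same variable appears in multiple `Environment=` lines, as `NO_PROXY` does twice here, the later assignment wins. A minimal sketch of staging such a file, using a temp directory as a stand-in for `/lib/systemd/system` (paths are hypothetical; nothing is installed or run as root):

```shell
#!/bin/sh
# Sketch: stage a docker.service fragment with a cleared ExecStart, the way
# the provisioner above does with `printf %s "..." | sudo tee ...`.
set -eu
stage_dir=$(mktemp -d)
printf '%s\n' \
  '[Service]' \
  'Environment="NO_PROXY=172.21.182.57"' \
  'Environment="NO_PROXY=172.21.182.57,172.21.184.151"' \
  'ExecStart=' \
  'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
  > "$stage_dir/docker.service.new"
# The first ExecStart= is intentionally empty: systemd treats it as a reset,
# so only the second command survives for this non-oneshot service.
grep -c '^ExecStart=' "$stage_dir/docker.service.new"   # both lines present
```

The same reset convention applies to any list-valued systemd directive, which is why drop-in files that replace (rather than append to) a base command must clear it first.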
	I1226 23:24:49.375656   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.21.182.57
	Environment=NO_PROXY=172.21.182.57,172.21.184.151
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:24:49.375656   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:51.550122   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:51.550122   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:51.550254   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:54.146951   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:54.146951   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:54.153736   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:54.154348   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:54.154348   14940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:24:55.472481   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:24:55.472699   14940 machine.go:91] provisioned docker machine in 40.9008068s
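The `diff -u old new || { mv ...; systemctl ... }` one-liner the provisioner ran just above is an update-only-if-changed idiom: `diff` exits non-zero when the files differ, or, as in this log, when the old file does not exist yet, so the install-and-restart branch runs only in those cases. A small reproduction of the pattern against throwaway files (paths and the `restarted` flag are made up for the demo; no service is actually restarted):

```shell
#!/bin/sh
# Update-if-changed: install the new file and take the "restart" branch only
# when it differs from (or is missing compared to) the current one.
set -eu
dir=$(mktemp -d)
printf 'v2\n' > "$dir/app.conf.new"
restarted=no
diff -u "$dir/app.conf" "$dir/app.conf.new" 2>/dev/null || {
  mv "$dir/app.conf.new" "$dir/app.conf"
  restarted=yes   # stands in for `systemctl daemon-reload && systemctl restart`
}
echo "$restarted"
```

On a second run with an unchanged file, `diff` exits zero and the branch is skipped, which is what makes the provisioning step idempotent.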
	I1226 23:24:55.472699   14940 start.go:300] post-start starting for "multinode-455300-m03" (driver="hyperv")
	I1226 23:24:55.472781   14940 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:24:55.486340   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:24:55.486340   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:57.618458   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:57.618652   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:57.618652   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:25:00.230146   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:25:00.230146   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:00.230489   14940 sshutil.go:53] new ssh client: &{IP:172.21.187.8 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m03\id_rsa Username:docker}
	I1226 23:25:00.344746   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8584066s)
	I1226 23:25:00.357947   14940 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:25:00.364041   14940 command_runner.go:130] > NAME=Buildroot
	I1226 23:25:00.364041   14940 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:25:00.364041   14940 command_runner.go:130] > ID=buildroot
	I1226 23:25:00.364041   14940 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:25:00.364041   14940 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:25:00.364041   14940 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:25:00.364041   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:25:00.365753   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:25:00.366888   14940 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:25:00.366888   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:25:00.380257   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:25:00.398587   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:25:00.440460   14940 start.go:303] post-start completed in 4.9676796s
	I1226 23:25:00.440460   14940 fix.go:56] fixHost completed within 1m23.4385434s
	I1226 23:25:00.440460   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:25:02.664761   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:25:02.664761   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:02.664761   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:25:05.296012   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:25:05.296205   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:05.302146   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:25:05.302911   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:25:05.302911   14940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 23:25:05.443269   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703633105.440781281
	
	I1226 23:25:05.443269   14940 fix.go:206] guest clock: 1703633105.440781281
	I1226 23:25:05.443269   14940 fix.go:219] Guest: 2023-12-26 23:25:05.440781281 +0000 UTC Remote: 2023-12-26 23:25:00.4404603 +0000 UTC m=+367.264342901 (delta=5.000320981s)
	I1226 23:25:05.443269   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:25:07.653272   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:25:07.653345   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:07.653345   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:25:10.305307   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:25:10.305307   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:10.311381   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:25:10.311573   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:25:10.312131   14940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703633105
	I1226 23:25:10.463031   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:25:05 UTC 2023
	
	I1226 23:25:10.463031   14940 fix.go:226] clock set: Tue Dec 26 23:25:05 UTC 2023
	 (err=<nil>)
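The clock-fix step above reads the guest's clock with `date +%s.%N` over SSH, compares it to a host-side timestamp, and on drift writes a known-good epoch back with `sudo date -s @<epoch>`. A sketch of the drift computation only, using the whole-second epochs from this log (pure arithmetic; no clock is changed, and which side is treated as authoritative is not asserted here):

```shell
#!/bin/sh
# Sketch of the drift check: compare two epoch timestamps and form the
# correction command the provisioner would issue if they disagree.
set -eu
guest=1703633105   # guest clock, from `date +%s.%N` (seconds part)
remote=1703633100  # host-side timestamp it was compared against
delta=$((guest - remote))
if [ "$delta" -ne 0 ]; then
  # Epoch choice is illustrative; the real code picks its own target.
  fix_cmd="sudo date -s @$remote"
else
  fix_cmd=""
fi
echo "delta=${delta}s"
```

A five-second delta like the one logged (`delta=5.000320981s`) largely reflects the round-trip of issuing PowerShell and SSH commands between the measurements, which is why the correction is applied rather than treated as real guest drift of unknown cause.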
	I1226 23:25:10.463031   14940 start.go:83] releasing machines lock for "multinode-455300-m03", held for 1m33.4612432s
	I1226 23:25:10.463031   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:25:12.692253   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:25:12.692253   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:12.692357   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-455300" : exit status 1
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-455300
multinode_test.go:328: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-455300: context deadline exceeded (274.8µs)
multinode_test.go:330: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-455300" : context deadline exceeded
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-455300	172.21.184.4
multinode-455300-m02	172.21.187.58
multinode-455300-m03	172.21.188.21

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-455300 -n multinode-455300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-455300 -n multinode-455300: (12.6202462s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 logs -n 25: (9.2396308s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-455300 cp testdata\cp-test.txt                                                                                 | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:09 UTC | 26 Dec 23 23:09 UTC |
	|         | multinode-455300-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:09 UTC | 26 Dec 23 23:10 UTC |
	|         | multinode-455300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:10 UTC | 26 Dec 23 23:10 UTC |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:10 UTC | 26 Dec 23 23:10 UTC |
	|         | multinode-455300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:10 UTC | 26 Dec 23 23:10 UTC |
	|         | multinode-455300:/home/docker/cp-test_multinode-455300-m02_multinode-455300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:10 UTC | 26 Dec 23 23:10 UTC |
	|         | multinode-455300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n multinode-455300 sudo cat                                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:10 UTC | 26 Dec 23 23:10 UTC |
	|         | /home/docker/cp-test_multinode-455300-m02_multinode-455300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:10 UTC | 26 Dec 23 23:11 UTC |
	|         | multinode-455300-m03:/home/docker/cp-test_multinode-455300-m02_multinode-455300-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:11 UTC | 26 Dec 23 23:11 UTC |
	|         | multinode-455300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n multinode-455300-m03 sudo cat                                                                    | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:11 UTC | 26 Dec 23 23:11 UTC |
	|         | /home/docker/cp-test_multinode-455300-m02_multinode-455300-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp testdata\cp-test.txt                                                                                 | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:11 UTC | 26 Dec 23 23:11 UTC |
	|         | multinode-455300-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:11 UTC | 26 Dec 23 23:11 UTC |
	|         | multinode-455300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:11 UTC | 26 Dec 23 23:12 UTC |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:12 UTC | 26 Dec 23 23:12 UTC |
	|         | multinode-455300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:12 UTC | 26 Dec 23 23:12 UTC |
	|         | multinode-455300:/home/docker/cp-test_multinode-455300-m03_multinode-455300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:12 UTC | 26 Dec 23 23:12 UTC |
	|         | multinode-455300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n multinode-455300 sudo cat                                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:12 UTC | 26 Dec 23 23:12 UTC |
	|         | /home/docker/cp-test_multinode-455300-m03_multinode-455300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt                                                        | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:12 UTC | 26 Dec 23 23:13 UTC |
	|         | multinode-455300-m02:/home/docker/cp-test_multinode-455300-m03_multinode-455300-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n                                                                                                  | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:13 UTC | 26 Dec 23 23:13 UTC |
	|         | multinode-455300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-455300 ssh -n multinode-455300-m02 sudo cat                                                                    | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:13 UTC | 26 Dec 23 23:13 UTC |
	|         | /home/docker/cp-test_multinode-455300-m03_multinode-455300-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-455300 node stop m03                                                                                           | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:13 UTC | 26 Dec 23 23:13 UTC |
	| node    | multinode-455300 node start                                                                                              | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:14 UTC | 26 Dec 23 23:16 UTC |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-455300                                                                                                 | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:17 UTC |                     |
	| stop    | -p multinode-455300                                                                                                      | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:17 UTC | 26 Dec 23 23:18 UTC |
	| start   | -p multinode-455300                                                                                                      | multinode-455300 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:18 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 23:18:53
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 23:18:53.346533   14940 out.go:296] Setting OutFile to fd 1300 ...
	I1226 23:18:53.347534   14940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:18:53.347534   14940 out.go:309] Setting ErrFile to fd 1040...
	I1226 23:18:53.347534   14940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:18:53.369537   14940 out.go:303] Setting JSON to false
	I1226 23:18:53.373524   14940 start.go:128] hostinfo: {"hostname":"minikube1","uptime":7132,"bootTime":1703625601,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 23:18:53.373524   14940 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 23:18:53.378526   14940 out.go:177] * [multinode-455300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 23:18:53.382543   14940 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:18:53.382543   14940 notify.go:220] Checking for updates...
	I1226 23:18:53.384533   14940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 23:18:53.387532   14940 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 23:18:53.390533   14940 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 23:18:53.393534   14940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 23:18:53.396534   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:18:53.396534   14940 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 23:18:58.744736   14940 out.go:177] * Using the hyperv driver based on existing profile
	I1226 23:18:58.748857   14940 start.go:298] selected driver: hyperv
	I1226 23:18:58.749005   14940 start.go:902] validating driver "hyperv" against &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:18:58.749005   14940 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 23:18:58.796052   14940 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 23:18:58.796052   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:18:58.796052   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:18:58.796052   14940 start_flags.go:323] config:
	{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.184.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:18:58.796664   14940 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:18:58.801612   14940 out.go:177] * Starting control plane node multinode-455300 in cluster multinode-455300
	I1226 23:18:58.803805   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:18:58.803805   14940 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 23:18:58.803805   14940 cache.go:56] Caching tarball of preloaded images
	I1226 23:18:58.804465   14940 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 23:18:58.804465   14940 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 23:18:58.804465   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:18:58.806611   14940 start.go:365] acquiring machines lock for multinode-455300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:18:58.807196   14940 start.go:369] acquired machines lock for "multinode-455300" in 584.5µs
	I1226 23:18:58.807196   14940 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:18:58.807196   14940 fix.go:54] fixHost starting: 
	I1226 23:18:58.807941   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:01.488243   14940 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:19:01.488243   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:01.489344   14940 fix.go:102] recreateIfNeeded on multinode-455300: state=Stopped err=<nil>
	W1226 23:19:01.489344   14940 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:19:01.492322   14940 out.go:177] * Restarting existing hyperv VM for "multinode-455300" ...
	I1226 23:19:01.495927   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300
	I1226 23:19:04.532264   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:04.532486   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:04.532486   14940 main.go:141] libmachine: Waiting for host to start...
	I1226 23:19:04.532565   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:06.802124   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:06.802306   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:06.802408   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:09.327795   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:09.327931   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:10.328604   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:12.554728   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:12.554966   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:12.554966   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:15.131876   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:15.132060   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:16.134733   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:18.358398   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:18.358398   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:18.358398   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:20.934107   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:20.934167   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:21.947838   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:24.189402   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:24.189402   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:24.189402   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:26.749185   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:19:26.749261   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:27.765373   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:30.031669   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:30.031669   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:30.031896   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:32.664464   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:32.664464   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:32.666910   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:34.848664   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:34.848664   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:34.848664   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:37.481469   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:37.481469   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:37.481469   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:19:37.484746   14940 machine.go:88] provisioning docker machine ...
	I1226 23:19:37.484827   14940 buildroot.go:166] provisioning hostname "multinode-455300"
	I1226 23:19:37.484943   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:39.629726   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:39.629936   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:39.630027   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:42.214437   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:42.214437   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:42.221897   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:19:42.222713   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:19:42.222713   14940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300 && echo "multinode-455300" | sudo tee /etc/hostname
	I1226 23:19:42.400370   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300
	
	I1226 23:19:42.400910   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:44.562322   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:44.562512   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:44.562512   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:47.131604   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:47.131604   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:47.137123   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:19:47.137952   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:19:47.137952   14940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 23:19:47.309743   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
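The SSH command above is minikube's idempotent `/etc/hosts` fixup: only touch the file when no line already ends with the hostname, and prefer rewriting an existing `127.0.1.1` entry over appending a new one. A minimal sketch of that logic, run against a temporary file instead of the real `/etc/hosts` (the `NAME`/`HOSTS` variables and the `oldname` seed entry are illustrative, not from the log):

```shell
# Sketch of the hostname fixup minikube runs over SSH, against a temp file.
NAME=multinode-455300
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

# Only act if no entry already maps to the hostname.
if ! grep -q "[[:space:]]${NAME}\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 line in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 ${NAME}" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the same script a second time is a no-op, which is why the provisioner can replay it on every start.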
	I1226 23:19:47.309743   14940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:19:47.309743   14940 buildroot.go:174] setting up certificates
	I1226 23:19:47.309743   14940 provision.go:83] configureAuth start
	I1226 23:19:47.309743   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:49.393760   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:49.393760   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:49.393846   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:51.939473   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:51.939473   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:51.939574   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:54.064322   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:54.064322   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:54.064322   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:19:56.612771   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:19:56.613102   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:56.613287   14940 provision.go:138] copyHostCerts
	I1226 23:19:56.613287   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:19:56.613287   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:19:56.613852   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:19:56.614347   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:19:56.615186   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:19:56.615186   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:19:56.615186   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:19:56.616091   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:19:56.617648   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:19:56.617816   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:19:56.617816   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:19:56.618353   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:19:56.619306   14940 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300 san=[172.21.182.57 172.21.182.57 localhost 127.0.0.1 minikube multinode-455300]
	I1226 23:19:56.841336   14940 provision.go:172] copyRemoteCerts
	I1226 23:19:56.852386   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:19:56.853421   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:19:59.038515   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:19:59.038515   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:19:59.038629   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:01.660897   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:01.660897   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:01.661425   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:01.787347   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9348692s)
	I1226 23:20:01.787347   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:20:01.787347   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:20:01.828830   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:20:01.829541   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I1226 23:20:01.875390   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:20:01.875921   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 23:20:01.917952   14940 provision.go:86] duration metric: configureAuth took 14.6081693s
	I1226 23:20:01.917996   14940 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:20:01.918357   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:20:01.918357   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:04.086062   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:04.086296   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:04.086393   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:06.724968   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:06.724968   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:06.731807   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:06.732515   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:06.732515   14940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:20:06.890061   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:20:06.890061   14940 buildroot.go:70] root file system type: tmpfs
	I1226 23:20:06.890356   14940 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:20:06.890470   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:09.059708   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:09.059708   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:09.059917   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:11.717054   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:11.717054   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:11.722954   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:11.723678   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:11.723678   14940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:20:11.919192   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
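The unit echoed back above deliberately contains an empty `ExecStart=` line immediately before the real one. As the unit's own comments note, systemd treats a bare `ExecStart=` as "clear any command inherited from the base configuration", so the drop-in replaces the daemon command instead of appending a second one (which systemd rejects for `Type=notify` services). A minimal sketch of that two-line pattern, written to a temp file rather than `/lib/systemd/system` (the temp path and the shortened `dockerd` command line are illustrative):

```shell
# Demonstrate the ExecStart-reset pattern from the unit above.
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Two ExecStart lines: the bare reset, then the replacement command.
grep -c '^ExecStart' "$UNIT"
```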
	
	I1226 23:20:11.919275   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:14.094356   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:14.094356   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:14.094647   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:16.729840   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:16.730179   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:16.736043   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:16.736857   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:16.736857   14940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:20:18.177192   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
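The `diff ... || { mv ...; systemctl ... }` one-liner above is an install-only-if-changed pattern: `diff` exits non-zero when the files differ (or, as in this run, when the target does not exist yet), and only then is the new unit moved into place and the service reloaded and restarted. A minimal sketch against temp files (paths are illustrative; the real command targets `/lib/systemd/system/docker.service` and actually invokes `systemctl`):

```shell
# "Install only if changed": move NEW over CUR only when they differ.
CUR=$(mktemp); NEW=$(mktemp)
echo "old config" > "$CUR"
echo "new config" > "$NEW"

diff -u "$CUR" "$NEW" >/dev/null || {
  mv "$NEW" "$CUR"   # install the updated file
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
cat "$CUR"
```

When the files are identical, `diff` exits 0, the braced group is skipped, and the running service is never restarted needlessly.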
	
	I1226 23:20:18.177192   14940 machine.go:91] provisioned docker machine in 40.6923732s
	I1226 23:20:18.177192   14940 start.go:300] post-start starting for "multinode-455300" (driver="hyperv")
	I1226 23:20:18.177192   14940 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:20:18.195070   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:20:18.195070   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:20.405793   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:20.406089   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:20.406089   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:23.064278   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:23.064413   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:23.064592   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:23.191586   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9964506s)
	I1226 23:20:23.206181   14940 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:20:23.212464   14940 command_runner.go:130] > NAME=Buildroot
	I1226 23:20:23.212605   14940 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:20:23.212605   14940 command_runner.go:130] > ID=buildroot
	I1226 23:20:23.212605   14940 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:20:23.212753   14940 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:20:23.212753   14940 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:20:23.212861   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:20:23.213428   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:20:23.214577   14940 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:20:23.214577   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:20:23.228436   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:20:23.245542   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:20:23.287478   14940 start.go:303] post-start completed in 5.1102875s
	I1226 23:20:23.287478   14940 fix.go:56] fixHost completed within 1m24.4802995s
	I1226 23:20:23.287478   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:25.487278   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:25.487278   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:25.487278   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:28.112520   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:28.112520   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:28.118459   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:28.119246   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:28.119391   14940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 23:20:28.273873   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703632828.271046441
	
	I1226 23:20:28.273873   14940 fix.go:206] guest clock: 1703632828.271046441
	I1226 23:20:28.273873   14940 fix.go:219] Guest: 2023-12-26 23:20:28.271046441 +0000 UTC Remote: 2023-12-26 23:20:23.2874786 +0000 UTC m=+90.111289801 (delta=4.983567841s)
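	The clock-fix step above reads the guest clock with sub-second precision, computes the drift against the host, and resets the guest from the host's epoch seconds. A minimal local sketch (the epoch value is copied from the log above; the actual reset is shown commented because it needs root inside the VM):

```shell
# Sketch of minikube's guest clock sync; host_epoch is illustrative,
# taken from the "sudo date -s @1703632828" line in this log.
guest=$(date +%s.%N)           # what "date +%s.%N" returns on the guest
host_epoch=1703632828          # host-side epoch seconds
echo "guest=$guest host=$host_epoch"
# sudo date -s @"$host_epoch"  # the actual reset run over SSH
```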
	I1226 23:20:28.274010   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:30.467819   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:30.467819   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:30.467819   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:33.076617   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:33.076617   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:33.082201   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:20:33.082991   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.57 22 <nil> <nil>}
	I1226 23:20:33.082991   14940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703632828
	I1226 23:20:33.248229   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:20:28 UTC 2023
	
	I1226 23:20:33.248229   14940 fix.go:226] clock set: Tue Dec 26 23:20:28 UTC 2023
	 (err=<nil>)
	I1226 23:20:33.248229   14940 start.go:83] releasing machines lock for "multinode-455300", held for 1m34.4410521s
	I1226 23:20:33.248770   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:35.389396   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:35.389628   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:35.389628   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:37.982522   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:37.982522   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:37.987492   14940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 23:20:37.987492   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:37.999146   14940 ssh_runner.go:195] Run: cat /version.json
	I1226 23:20:37.999146   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:20:40.220643   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:40.220736   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:40.220815   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:20:40.220815   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:40.220920   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:40.220920   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:20:42.932082   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:42.932281   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:42.932524   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:42.951984   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:20:42.951984   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:20:42.951984   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:20:43.032777   14940 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I1226 23:20:43.032864   14940 ssh_runner.go:235] Completed: cat /version.json: (5.0337188s)
	I1226 23:20:43.046955   14940 ssh_runner.go:195] Run: systemctl --version
	I1226 23:20:43.138625   14940 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 23:20:43.138802   14940 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1511344s)
	I1226 23:20:43.138802   14940 command_runner.go:130] > systemd 247 (247)
	I1226 23:20:43.138891   14940 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1226 23:20:43.152571   14940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 23:20:43.164755   14940 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1226 23:20:43.165509   14940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 23:20:43.178888   14940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 23:20:43.203771   14940 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1226 23:20:43.203771   14940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
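	The disable step above renames bridge/podman CNI configs with a `.mk_disabled` suffix so the runtime ignores them. A stand-alone demo against a temp directory (the directory is a stand-in for `/etc/cni/net.d`; the file name is taken from the log):

```shell
# Rename bridge/podman CNI configs to *.mk_disabled, skipping ones
# already disabled; temp dir substitutes for /etc/cni/net.d.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist.mk_disabled"
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
moved=$(ls "$d")
echo "$moved"
rm -rf "$d"
```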
	I1226 23:20:43.203771   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:20:43.203771   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:20:43.233202   14940 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1226 23:20:43.246065   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 23:20:43.277411   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 23:20:43.294174   14940 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 23:20:43.307588   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 23:20:43.336597   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:20:43.370359   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 23:20:43.400645   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:20:43.430141   14940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 23:20:43.461011   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
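	The sed series above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver. A throwaway local demo of the `SystemdCgroup` rewrite (the config fragment is illustrative; the real target path is the one in the log):

```shell
# Demo of the indentation-preserving SystemdCgroup rewrite minikube runs.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep 'SystemdCgroup' "$cfg")
echo "$result"
rm -f "$cfg"
```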
	I1226 23:20:43.493760   14940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 23:20:43.510041   14940 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 23:20:43.523806   14940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 23:20:43.553234   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:43.724950   14940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 23:20:43.752033   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:20:43.767045   14940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 23:20:43.791954   14940 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1226 23:20:43.791954   14940 command_runner.go:130] > [Unit]
	I1226 23:20:43.791954   14940 command_runner.go:130] > Description=Docker Application Container Engine
	I1226 23:20:43.791954   14940 command_runner.go:130] > Documentation=https://docs.docker.com
	I1226 23:20:43.791954   14940 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1226 23:20:43.791954   14940 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1226 23:20:43.791954   14940 command_runner.go:130] > StartLimitBurst=3
	I1226 23:20:43.791954   14940 command_runner.go:130] > StartLimitIntervalSec=60
	I1226 23:20:43.791954   14940 command_runner.go:130] > [Service]
	I1226 23:20:43.791954   14940 command_runner.go:130] > Type=notify
	I1226 23:20:43.791954   14940 command_runner.go:130] > Restart=on-failure
	I1226 23:20:43.791954   14940 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1226 23:20:43.791954   14940 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1226 23:20:43.791954   14940 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1226 23:20:43.791954   14940 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1226 23:20:43.791954   14940 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1226 23:20:43.791954   14940 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1226 23:20:43.791954   14940 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1226 23:20:43.791954   14940 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1226 23:20:43.791954   14940 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1226 23:20:43.791954   14940 command_runner.go:130] > ExecStart=
	I1226 23:20:43.791954   14940 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1226 23:20:43.791954   14940 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1226 23:20:43.791954   14940 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1226 23:20:43.791954   14940 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1226 23:20:43.791954   14940 command_runner.go:130] > LimitNOFILE=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > LimitNPROC=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > LimitCORE=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1226 23:20:43.791954   14940 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1226 23:20:43.791954   14940 command_runner.go:130] > TasksMax=infinity
	I1226 23:20:43.791954   14940 command_runner.go:130] > TimeoutStartSec=0
	I1226 23:20:43.791954   14940 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1226 23:20:43.791954   14940 command_runner.go:130] > Delegate=yes
	I1226 23:20:43.791954   14940 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1226 23:20:43.791954   14940 command_runner.go:130] > KillMode=process
	I1226 23:20:43.791954   14940 command_runner.go:130] > [Install]
	I1226 23:20:43.791954   14940 command_runner.go:130] > WantedBy=multi-user.target
	I1226 23:20:43.806265   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:20:43.840822   14940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 23:20:43.888855   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:20:43.924775   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:20:43.961013   14940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 23:20:44.022763   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:20:44.044977   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:20:44.076095   14940 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1226 23:20:44.089733   14940 ssh_runner.go:195] Run: which cri-dockerd
	I1226 23:20:44.095990   14940 command_runner.go:130] > /usr/bin/cri-dockerd
	I1226 23:20:44.110463   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 23:20:44.129679   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 23:20:44.173364   14940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 23:20:44.349002   14940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 23:20:44.513856   14940 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 23:20:44.514108   14940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 23:20:44.561218   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:44.740859   14940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 23:20:46.457974   14940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7171149s)
	I1226 23:20:46.475803   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:20:46.667076   14940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 23:20:46.853889   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:20:47.032893   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:47.216598   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 23:20:47.255948   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:20:47.449102   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 23:20:47.561358   14940 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 23:20:47.575209   14940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 23:20:47.583211   14940 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1226 23:20:47.583375   14940 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 23:20:47.583375   14940 command_runner.go:130] > Device: 16h/22d	Inode: 898         Links: 1
	I1226 23:20:47.583375   14940 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1226 23:20:47.583451   14940 command_runner.go:130] > Access: 2023-12-26 23:20:47.468923428 +0000
	I1226 23:20:47.583451   14940 command_runner.go:130] > Modify: 2023-12-26 23:20:47.468923428 +0000
	I1226 23:20:47.583480   14940 command_runner.go:130] > Change: 2023-12-26 23:20:47.473923428 +0000
	I1226 23:20:47.583480   14940 command_runner.go:130] >  Birth: -
	I1226 23:20:47.583978   14940 start.go:543] Will wait 60s for crictl version
	I1226 23:20:47.598353   14940 ssh_runner.go:195] Run: which crictl
	I1226 23:20:47.603460   14940 command_runner.go:130] > /usr/bin/crictl
	I1226 23:20:47.616646   14940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 23:20:47.693255   14940 command_runner.go:130] > Version:  0.1.0
	I1226 23:20:47.693336   14940 command_runner.go:130] > RuntimeName:  docker
	I1226 23:20:47.693336   14940 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1226 23:20:47.693336   14940 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 23:20:47.693443   14940 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 23:20:47.704313   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:20:47.739368   14940 command_runner.go:130] > 24.0.7
	I1226 23:20:47.750325   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:20:47.784451   14940 command_runner.go:130] > 24.0.7
	I1226 23:20:47.789113   14940 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 23:20:47.789113   14940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 23:20:47.795251   14940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 23:20:47.795502   14940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 23:20:47.795502   14940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 23:20:47.795502   14940 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 23:20:47.799213   14940 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 23:20:47.799213   14940 ip.go:210] interface addr: 172.21.176.1/20
	I1226 23:20:47.811837   14940 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 23:20:47.818457   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
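	The /etc/hosts edit above is idempotent: any existing `host.minikube.internal` line is stripped before the current one is appended, so reruns never duplicate the entry. Demo on a temp copy (the IP is the interface address found in the log):

```shell
# Idempotent hosts-entry update, run against a temp file instead of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.21.176.1\thost.minikube.internal\n' > "$hosts"
{ grep -v "$(printf '\t')host.minikube.internal\$" "$hosts"; printf '172.21.176.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
count=$(grep -c 'host.minikube.internal' "$hosts")
echo "entries=$count"
rm -f "$hosts"
```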
	I1226 23:20:47.837668   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:20:47.847599   14940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1226 23:20:47.875233   14940 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1226 23:20:47.875351   14940 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1226 23:20:47.875351   14940 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1226 23:20:47.875351   14940 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1226 23:20:47.875351   14940 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 23:20:47.875351   14940 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1226 23:20:47.875445   14940 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1226 23:20:47.875445   14940 docker.go:601] Images already preloaded, skipping extraction
	I1226 23:20:47.884322   14940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1226 23:20:47.909964   14940 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1226 23:20:47.909964   14940 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1226 23:20:47.909964   14940 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 23:20:47.909964   14940 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1226 23:20:47.909964   14940 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1226 23:20:47.909964   14940 cache_images.go:84] Images are preloaded, skipping loading
	I1226 23:20:47.918964   14940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 23:20:47.955003   14940 command_runner.go:130] > cgroupfs
	I1226 23:20:47.955677   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:20:47.956008   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:20:47.956008   14940 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 23:20:47.956008   14940 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.182.57 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-455300 NodeName:multinode-455300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.182.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.182.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 23:20:47.956483   14940 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.182.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-455300"
	  kubeletExtraArgs:
	    node-ip: 172.21.182.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
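	A quick stand-alone check that the rendered config above carries the cgroupfs driver selected by the detection step (the fragment is copied from the KubeletConfiguration section of this log; the temp file stands in for /var/tmp/minikube/kubeadm.yaml.new):

```shell
# Extract cgroupDriver from a copy of the rendered kubelet config fragment.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
EOF
driver=$(sed -n 's/^cgroupDriver: //p' "$cfg")
echo "cgroupDriver=$driver"
rm -f "$cfg"
```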
	I1226 23:20:47.956759   14940 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-455300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.182.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 23:20:47.970960   14940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 23:20:47.989034   14940 command_runner.go:130] > kubeadm
	I1226 23:20:47.989069   14940 command_runner.go:130] > kubectl
	I1226 23:20:47.989069   14940 command_runner.go:130] > kubelet
	I1226 23:20:47.989115   14940 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 23:20:48.002719   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 23:20:48.018037   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1226 23:20:48.045454   14940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 23:20:48.074413   14940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1226 23:20:48.122052   14940 ssh_runner.go:195] Run: grep 172.21.182.57	control-plane.minikube.internal$ /etc/hosts
	I1226 23:20:48.128839   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.182.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 23:20:48.147956   14940 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300 for IP: 172.21.182.57
	I1226 23:20:48.147956   14940 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:48.148147   14940 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 23:20:48.148963   14940 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 23:20:48.149858   14940 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\client.key
	I1226 23:20:48.149968   14940 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380
	I1226 23:20:48.150135   14940 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380 with IP's: [172.21.182.57 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 23:20:48.313557   14940 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380 ...
	I1226 23:20:48.314562   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380: {Name:mk331fe892099c0aec4f61b69d60598dd6a86faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:48.315586   14940 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380 ...
	I1226 23:20:48.315586   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380: {Name:mk9ce275ae6084ede4e9476a8540b9bee334314d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:48.316554   14940 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt.76181380 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt
	I1226 23:20:48.329559   14940 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key.76181380 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key
	I1226 23:20:48.330575   14940 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key
	I1226 23:20:48.330575   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 23:20:48.330575   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 23:20:48.331628   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 23:20:48.331628   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 23:20:48.332343   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 23:20:48.333008   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1226 23:20:48.333008   14940 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1226 23:20:48.333670   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 23:20:48.333842   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 23:20:48.333842   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 23:20:48.334475   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 23:20:48.335066   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1226 23:20:48.335380   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /usr/share/ca-certificates/107282.pem
	I1226 23:20:48.335380   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.335962   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem -> /usr/share/ca-certificates/10728.pem
	I1226 23:20:48.336667   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 23:20:48.380971   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 23:20:48.424521   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 23:20:48.463972   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 23:20:48.513727   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 23:20:48.556351   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 23:20:48.596579   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 23:20:48.640632   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 23:20:48.681514   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1226 23:20:48.720880   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 23:20:48.760674   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1226 23:20:48.800891   14940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 23:20:48.842926   14940 ssh_runner.go:195] Run: openssl version
	I1226 23:20:48.853228   14940 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1226 23:20:48.866966   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 23:20:48.898433   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.905828   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.905828   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.919244   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:20:48.926722   14940 command_runner.go:130] > b5213941
	I1226 23:20:48.940737   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 23:20:48.972434   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1226 23:20:49.005627   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.012588   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.012588   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.025587   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1226 23:20:49.032630   14940 command_runner.go:130] > 51391683
	I1226 23:20:49.049709   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1226 23:20:49.082696   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1226 23:20:49.113188   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.119749   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.120581   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.133231   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1226 23:20:49.141894   14940 command_runner.go:130] > 3ec20f2e
	I1226 23:20:49.155041   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
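The `openssl x509 -hash` plus `ln -fs .../\<hash\>.0` steps above follow the OpenSSL CA-directory convention: `verify` locates a trusted CA in a `-CApath` directory by its subject hash, so each PEM needs a `<hash>.0` symlink. A self-contained sketch with a throwaway self-signed CA (hypothetical CN):

```shell
cadir=$(mktemp -d)
# Generate a throwaway self-signed CA cert and key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$cadir/ca.key" \
  -out "$cadir/ca.pem" -days 1 -subj "/CN=sketchCA" 2>/dev/null
# Compute the subject hash (the log shows values like b5213941) and create
# the <hash>.0 symlink that -CApath lookups expect.
hash=$(openssl x509 -hash -noout -in "$cadir/ca.pem")
ln -fs "$cadir/ca.pem" "$cadir/$hash.0"
# The self-signed cert now verifies against the hashed directory.
openssl verify -CApath "$cadir" "$cadir/ca.pem"
```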
	I1226 23:20:49.186181   14940 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 23:20:49.193198   14940 command_runner.go:130] > ca.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > ca.key
	I1226 23:20:49.193198   14940 command_runner.go:130] > healthcheck-client.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > healthcheck-client.key
	I1226 23:20:49.193198   14940 command_runner.go:130] > peer.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > peer.key
	I1226 23:20:49.193198   14940 command_runner.go:130] > server.crt
	I1226 23:20:49.193198   14940 command_runner.go:130] > server.key
	I1226 23:20:49.206310   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1226 23:20:49.215198   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.227278   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1226 23:20:49.235982   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.249364   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1226 23:20:49.256305   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.269842   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1226 23:20:49.278009   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.293716   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1226 23:20:49.303323   14940 command_runner.go:130] > Certificate will not expire
	I1226 23:20:49.316009   14940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1226 23:20:49.324851   14940 command_runner.go:130] > Certificate will not expire
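The repeated `Certificate will not expire` lines come from `openssl x509 -checkend 86400`: exit status 0 (and that message) means the cert is still valid 86400 seconds (24h) from now. A sketch with a throwaway self-signed cert valid for 2 days (hypothetical CN):

```shell
edir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$edir/t.key" \
  -out "$edir/t.crt" -days 2 -subj "/CN=expiry-sketch" 2>/dev/null
# Still valid 24h out: prints "Certificate will not expire", exits 0.
openssl x509 -noout -in "$edir/t.crt" -checkend 86400
# Not valid 3 days out: prints "Certificate will expire", exits nonzero.
openssl x509 -noout -in "$edir/t.crt" -checkend 259200 || true
```

The exit status is what matters to callers; the message is just a convenience, which is why the log captures both.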
	I1226 23:20:49.325787   14940 kubeadm.go:404] StartCluster: {Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.182.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.187.58 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:20:49.336039   14940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 23:20:49.380428   14940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 23:20:49.399417   14940 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1226 23:20:49.399417   14940 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1226 23:20:49.399417   14940 command_runner.go:130] > /var/lib/minikube/etcd:
	I1226 23:20:49.399417   14940 command_runner.go:130] > member
	I1226 23:20:49.399417   14940 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1226 23:20:49.399417   14940 kubeadm.go:636] restartCluster start
	I1226 23:20:49.412418   14940 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1226 23:20:49.427741   14940 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1226 23:20:49.428800   14940 kubeconfig.go:135] verify returned: extract IP: "multinode-455300" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:20:49.429154   14940 kubeconfig.go:146] "multinode-455300" context is missing from C:\Users\jenkins.minikube1\minikube-integration\kubeconfig - will repair!
	I1226 23:20:49.429376   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:20:49.443034   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:20:49.444020   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:20:49.446115   14940 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 23:20:49.457631   14940 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1226 23:20:49.476514   14940 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I1226 23:20:49.476514   14940 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I1226 23:20:49.476514   14940 command_runner.go:130] > @@ -1,7 +1,7 @@
	I1226 23:20:49.476514   14940 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I1226 23:20:49.476514   14940 command_runner.go:130] >  kind: InitConfiguration
	I1226 23:20:49.476514   14940 command_runner.go:130] >  localAPIEndpoint:
	I1226 23:20:49.476514   14940 command_runner.go:130] > -  advertiseAddress: 172.21.184.4
	I1226 23:20:49.476514   14940 command_runner.go:130] > +  advertiseAddress: 172.21.182.57
	I1226 23:20:49.476514   14940 command_runner.go:130] >    bindPort: 8443
	I1226 23:20:49.476514   14940 command_runner.go:130] >  bootstrapTokens:
	I1226 23:20:49.476514   14940 command_runner.go:130] >    - groups:
	I1226 23:20:49.476514   14940 command_runner.go:130] > @@ -14,13 +14,13 @@
	I1226 23:20:49.476514   14940 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I1226 23:20:49.476514   14940 command_runner.go:130] >    name: "multinode-455300"
	I1226 23:20:49.476514   14940 command_runner.go:130] >    kubeletExtraArgs:
	I1226 23:20:49.476514   14940 command_runner.go:130] > -    node-ip: 172.21.184.4
	I1226 23:20:49.476514   14940 command_runner.go:130] > +    node-ip: 172.21.182.57
	I1226 23:20:49.476514   14940 command_runner.go:130] >    taints: []
	I1226 23:20:49.476514   14940 command_runner.go:130] >  ---
	I1226 23:20:49.476514   14940 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I1226 23:20:49.476514   14940 command_runner.go:130] >  kind: ClusterConfiguration
	I1226 23:20:49.476514   14940 command_runner.go:130] >  apiServer:
	I1226 23:20:49.476514   14940 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.21.184.4"]
	I1226 23:20:49.476514   14940 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	I1226 23:20:49.476514   14940 command_runner.go:130] >    extraArgs:
	I1226 23:20:49.476514   14940 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I1226 23:20:49.476514   14940 command_runner.go:130] >  controllerManager:
	I1226 23:20:49.476514   14940 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.21.184.4
	+  advertiseAddress: 172.21.182.57
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-455300"
	   kubeletExtraArgs:
	-    node-ip: 172.21.184.4
	+    node-ip: 172.21.182.57
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.21.184.4"]
	+  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
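The "needs reconfigure: configs differ" decision above hinges on `diff`'s exit status: `diff -u old new` exits nonzero when the freshly rendered kubeadm.yaml differs from the one on disk. A sketch with hypothetical scratch files standing in for `/var/tmp/minikube/kubeadm.yaml{,.new}`:

```shell
old=$(mktemp); new=$(mktemp)
printf 'advertiseAddress: 172.21.184.4\n'  > "$old"
printf 'advertiseAddress: 172.21.182.57\n' > "$new"
# diff exits 1 on differing files, so the if-branch is the reconfigure path.
if ! diff -u "$old" "$new"; then
  echo "configs differ"
  cp "$new" "$old"   # adopt the new config, as the later cp in the log does
fi
```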
	I1226 23:20:49.476514   14940 kubeadm.go:1135] stopping kube-system containers ...
	I1226 23:20:49.487715   14940 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1226 23:20:49.517857   14940 command_runner.go:130] > 5944000e150d
	I1226 23:20:49.518750   14940 command_runner.go:130] > c49ce5a60988
	I1226 23:20:49.518750   14940 command_runner.go:130] > 94c58afb0b3a
	I1226 23:20:49.518750   14940 command_runner.go:130] > 58a2f8149f7f
	I1226 23:20:49.518750   14940 command_runner.go:130] > 5e6fbedb8b41
	I1226 23:20:49.518750   14940 command_runner.go:130] > de1e7a6bed71
	I1226 23:20:49.518750   14940 command_runner.go:130] > 6374d63f4880
	I1226 23:20:49.518750   14940 command_runner.go:130] > e74bc4380f45
	I1226 23:20:49.518750   14940 command_runner.go:130] > 2c33bdd1003a
	I1226 23:20:49.518750   14940 command_runner.go:130] > 239b6c40fa39
	I1226 23:20:49.518750   14940 command_runner.go:130] > 9a1fd87d0726
	I1226 23:20:49.518750   14940 command_runner.go:130] > 0d2ca397ea4b
	I1226 23:20:49.518750   14940 command_runner.go:130] > dd32942a9720
	I1226 23:20:49.518750   14940 command_runner.go:130] > 2303b2b6305d
	I1226 23:20:49.518750   14940 command_runner.go:130] > f18330f939ce
	I1226 23:20:49.518750   14940 command_runner.go:130] > d6f5bd631857
	I1226 23:20:49.519056   14940 docker.go:469] Stopping containers: [5944000e150d c49ce5a60988 94c58afb0b3a 58a2f8149f7f 5e6fbedb8b41 de1e7a6bed71 6374d63f4880 e74bc4380f45 2c33bdd1003a 239b6c40fa39 9a1fd87d0726 0d2ca397ea4b dd32942a9720 2303b2b6305d f18330f939ce d6f5bd631857]
	I1226 23:20:49.530540   14940 ssh_runner.go:195] Run: docker stop 5944000e150d c49ce5a60988 94c58afb0b3a 58a2f8149f7f 5e6fbedb8b41 de1e7a6bed71 6374d63f4880 e74bc4380f45 2c33bdd1003a 239b6c40fa39 9a1fd87d0726 0d2ca397ea4b dd32942a9720 2303b2b6305d f18330f939ce d6f5bd631857
	I1226 23:20:49.560278   14940 command_runner.go:130] > 5944000e150d
	I1226 23:20:49.560331   14940 command_runner.go:130] > c49ce5a60988
	I1226 23:20:49.560331   14940 command_runner.go:130] > 94c58afb0b3a
	I1226 23:20:49.560331   14940 command_runner.go:130] > 58a2f8149f7f
	I1226 23:20:49.560391   14940 command_runner.go:130] > 5e6fbedb8b41
	I1226 23:20:49.560391   14940 command_runner.go:130] > de1e7a6bed71
	I1226 23:20:49.560391   14940 command_runner.go:130] > 6374d63f4880
	I1226 23:20:49.560391   14940 command_runner.go:130] > e74bc4380f45
	I1226 23:20:49.560429   14940 command_runner.go:130] > 2c33bdd1003a
	I1226 23:20:49.560429   14940 command_runner.go:130] > 239b6c40fa39
	I1226 23:20:49.560429   14940 command_runner.go:130] > 9a1fd87d0726
	I1226 23:20:49.560429   14940 command_runner.go:130] > 0d2ca397ea4b
	I1226 23:20:49.560429   14940 command_runner.go:130] > dd32942a9720
	I1226 23:20:49.560429   14940 command_runner.go:130] > 2303b2b6305d
	I1226 23:20:49.560429   14940 command_runner.go:130] > f18330f939ce
	I1226 23:20:49.560429   14940 command_runner.go:130] > d6f5bd631857
	I1226 23:20:49.573611   14940 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1226 23:20:49.613888   14940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1226 23:20:49.631181   14940 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 23:20:49.631181   14940 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 23:20:49.645688   14940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 23:20:49.662008   14940 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1226 23:20:49.662046   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:50.077512   14940 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 23:20:50.077595   14940 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1226 23:20:50.077595   14940 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1226 23:20:50.077595   14940 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1226 23:20:50.077639   14940 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1226 23:20:50.077673   14940 command_runner.go:130] > [certs] Using the existing "sa" key
	I1226 23:20:50.077673   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 23:20:51.599852   14940 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 23:20:51.600011   14940 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 23:20:51.600011   14940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.5223381s)
	I1226 23:20:51.600011   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:51.877274   14940 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 23:20:51.877432   14940 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 23:20:51.877432   14940 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 23:20:51.877523   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 23:20:51.974976   14940 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 23:20:51.974976   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:20:52.063360   14940 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 23:20:52.063360   14940 api_server.go:52] waiting for apiserver process to appear ...
	I1226 23:20:52.076364   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:52.588295   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:53.083794   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:53.582975   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:54.092938   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:54.587224   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:55.091307   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:55.584622   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:20:55.612009   14940 command_runner.go:130] > 1851
	I1226 23:20:55.612009   14940 api_server.go:72] duration metric: took 3.5486495s to wait for apiserver process to appear ...
	I1226 23:20:55.612167   14940 api_server.go:88] waiting for apiserver healthz status ...
	I1226 23:20:55.612167   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:20:59.671666   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1226 23:20:59.671666   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1226 23:20:59.672026   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:20:59.714417   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1226 23:20:59.714914   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1226 23:21:00.119743   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:00.128570   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 23:21:00.128687   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 23:21:00.618095   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:00.634132   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 23:21:00.634578   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 23:21:01.124147   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:01.140402   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 23:21:01.140663   14940 api_server.go:103] status: https://172.21.182.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 23:21:01.615956   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:01.625174   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 200:
	ok
	I1226 23:21:01.625562   14940 round_trippers.go:463] GET https://172.21.182.57:8443/version
	I1226 23:21:01.625562   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:01.625562   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:01.625562   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:01.645153   14940 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1226 23:21:01.645153   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:01.645153   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:01.645153   14940 round_trippers.go:580]     Content-Length: 264
	I1226 23:21:01.645153   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:01 GMT
	I1226 23:21:01.645153   14940 round_trippers.go:580]     Audit-Id: 2e623e6d-fa5a-4564-bba2-14c0b0936dfc
	I1226 23:21:01.646102   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:01.646102   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:01.646102   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:01.646102   14940 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1226 23:21:01.646347   14940 api_server.go:141] control plane version: v1.28.4
	I1226 23:21:01.646539   14940 api_server.go:131] duration metric: took 6.0343739s to wait for apiserver health ...
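The healthz wait that just completed above follows a simple pattern: poll `https://<ip>:8443/healthz` roughly every 500 ms, treat 403 (RBAC roles not yet bootstrapped) and 500 (poststarthooks still failing) as "not ready", and stop on the first 200 `ok`. A minimal sketch of that loop, with hypothetical names (`probe`, `wait_for_healthz`) that are not minikube's actual API — the real implementation lives in the `api_server.go` paths referenced in the log:

```python
import time

def is_healthy(status: int, body: str) -> bool:
    """A response counts as healthy only when healthz returns 200 with body 'ok'."""
    return status == 200 and body.strip() == "ok"

def wait_for_healthz(probe, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll probe() -> (status, body) until healthy or the timeout expires.

    `probe`, `timeout`, and `interval` are illustrative parameters, not
    minikube's actual signature; 403 and 500 responses simply fall through
    and trigger another poll, mirroring the retries visible in the log.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, body = probe()
        if is_healthy(status, body):
            return True
        time.sleep(interval)
    return False
```

In the log above this plays out as: two 403s while `system:anonymous` is still forbidden, several 500s while `rbac/bootstrap-roles` is failing, then a 200 after about six seconds.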
	I1226 23:21:01.646539   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:21:01.646539   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:21:01.650361   14940 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 23:21:01.666280   14940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 23:21:01.673678   14940 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 23:21:01.673732   14940 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1226 23:21:01.673732   14940 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1226 23:21:01.673732   14940 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 23:21:01.673732   14940 command_runner.go:130] > Access: 2023-12-26 23:19:30.718927400 +0000
	I1226 23:21:01.673732   14940 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1226 23:21:01.673809   14940 command_runner.go:130] > Change: 2023-12-26 23:19:18.490000000 +0000
	I1226 23:21:01.673809   14940 command_runner.go:130] >  Birth: -
	I1226 23:21:01.675152   14940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 23:21:01.675152   14940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 23:21:01.726705   14940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 23:21:03.860598   14940 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:21:03.861423   14940 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:21:03.861423   14940 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 23:21:03.861483   14940 command_runner.go:130] > daemonset.apps/kindnet configured
	I1226 23:21:03.861483   14940 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.1347786s)
	I1226 23:21:03.861573   14940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 23:21:03.861768   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:03.861768   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:03.861851   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:03.861851   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:03.867879   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:03.867879   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:03.867943   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:03.867943   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:03 GMT
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Audit-Id: ffffb957-0c6c-41bc-ac27-8354fa858ef7
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:03.867943   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:03.869958   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1722"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84131 chars]
	I1226 23:21:03.876434   14940 system_pods.go:59] 12 kube-system pods found
	I1226 23:21:03.876434   14940 system_pods.go:61] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1226 23:21:03.876434   14940 system_pods.go:61] "etcd-multinode-455300" [cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kindnet-8jsvj" [376eb267-ce7d-4497-a85e-ff9224a25347] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kindnet-zt55b" [43604859-483f-4e92-a16c-d3f30cb6e4f1] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-apiserver-multinode-455300" [bbe5516b-f745-4a20-8df3-3cd3ac15d7f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-proxy-2pfcl" [61b5d2fb-802c-4b84-b7fa-7a7e9e024028] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-proxy-bqlf8" [1caff24c-909f-42a9-a4b8-d9c8c1ec8828] Running
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1226 23:21:03.876434   14940 system_pods.go:61] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1226 23:21:03.876434   14940 system_pods.go:61] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1226 23:21:03.876434   14940 system_pods.go:74] duration metric: took 14.8608ms to wait for pod list to return data ...
	I1226 23:21:03.876434   14940 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:21:03.876975   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes
	I1226 23:21:03.876975   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:03.876975   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:03.876975   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:03.880778   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:03.881202   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:03.881202   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:03 GMT
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Audit-Id: 60c541fe-1ba1-45b4-aeae-9f94ac186852
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:03.881202   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:03.881202   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:03.881596   14940 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1722"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14857 chars]
	I1226 23:21:03.883217   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:03.883309   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:03.883309   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:03.883400   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:03.883400   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:03.883444   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:03.883444   14940 node_conditions.go:105] duration metric: took 7.01ms to run NodePressure ...
	I1226 23:21:03.883515   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 23:21:04.279641   14940 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1226 23:21:04.279723   14940 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1226 23:21:04.279792   14940 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1226 23:21:04.280023   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1226 23:21:04.280063   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.280063   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.280063   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.284745   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.285084   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.285084   14940 round_trippers.go:580]     Audit-Id: 52e8e3a9-24c4-44bb-a58e-075468a5ab79
	I1226 23:21:04.285084   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.285084   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.285084   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.285171   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.285171   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.285955   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1724"},"items":[{"metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1717","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I1226 23:21:04.287435   14940 kubeadm.go:787] kubelet initialised
	I1226 23:21:04.287488   14940 kubeadm.go:788] duration metric: took 7.6597ms waiting for restarted kubelet to initialise ...
	I1226 23:21:04.287488   14940 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:04.287611   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:04.287694   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.287694   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.287694   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.297261   14940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 23:21:04.297261   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.297261   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.297261   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Audit-Id: d994bc68-69cd-4473-b10d-bc2eaa017000
	I1226 23:21:04.297261   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.298249   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1724"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84131 chars]
	I1226 23:21:04.303531   14940 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.303717   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:04.303717   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.303717   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.303717   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.308476   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.309418   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Audit-Id: 94880398-21a1-4ab8-bcbf-53875901d606
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.309418   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.309418   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.309418   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.309418   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:04.310478   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.310478   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.310478   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.310478   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.313876   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.313876   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Audit-Id: 4e73f529-0f8b-4087-9bc9-d2c591dec233
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.313876   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.313876   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.313876   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.313876   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.314884   14940 pod_ready.go:97] node "multinode-455300" hosting pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.314884   14940 pod_ready.go:81] duration metric: took 11.2749ms waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.314884   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.314884   14940 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.314884   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:21:04.314884   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.314884   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.314884   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.318961   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.319319   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Audit-Id: 2b18f8e8-a7fc-4cfe-b52a-7cefa4e85b39
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.319319   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.319319   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.319319   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.319575   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1717","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I1226 23:21:04.319644   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.319644   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.319644   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.319644   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.323255   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.324117   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Audit-Id: bb4e9307-d014-4826-bfa5-51df6c8a614d
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.324117   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.324117   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.324117   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.324486   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.324624   14940 pod_ready.go:97] node "multinode-455300" hosting pod "etcd-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.324624   14940 pod_ready.go:81] duration metric: took 9.7405ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.324624   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "etcd-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.324624   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.324624   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:21:04.324624   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.324624   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.324624   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.328209   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.328209   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.328209   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.328209   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Audit-Id: 4ac558de-73a9-4fa2-8a3a-cd4da867bc95
	I1226 23:21:04.328209   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.328209   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"bbe5516b-f745-4a20-8df3-3cd3ac15d7f6","resourceVersion":"1718","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.182.57:8443","kubernetes.io/config.hash":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.mirror":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.seen":"2023-12-26T23:20:52.614245928Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I1226 23:21:04.329215   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.329215   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.329215   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.329215   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.333214   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.333327   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.333327   14940 round_trippers.go:580]     Audit-Id: 6b5291d2-7aa6-48a5-ba78-504d1b1a392f
	I1226 23:21:04.333327   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.333401   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.333401   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.333401   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.333401   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.333401   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.333939   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-apiserver-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.334100   14940 pod_ready.go:81] duration metric: took 9.4758ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.334100   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-apiserver-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.334100   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.334185   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:21:04.334249   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.334249   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.334295   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.339452   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:04.339452   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.339452   14940 round_trippers.go:580]     Audit-Id: b21030d5-2e33-48df-b8cf-af15c479cdf3
	I1226 23:21:04.339452   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.339993   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.339993   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.339993   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.339993   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.340359   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"1710","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1226 23:21:04.340963   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:04.341003   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.341044   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.341044   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.344820   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:04.344820   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.344877   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.344877   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.344877   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.344918   14940 round_trippers.go:580]     Audit-Id: bb155b3b-b54c-4535-9762-5a011f8faf6b
	I1226 23:21:04.344918   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.344918   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.345882   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:04.346319   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-controller-manager-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.346319   14940 pod_ready.go:81] duration metric: took 12.2191ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:04.346319   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-controller-manager-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:04.346319   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.481373   14940 request.go:629] Waited for 134.7552ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:21:04.481535   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:21:04.481535   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.481535   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.481535   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.487102   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:04.487102   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.487102   14940 round_trippers.go:580]     Audit-Id: 83225397-ba7b-40c6-9cca-3e80ab93ddcb
	I1226 23:21:04.487649   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.487649   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.487649   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.487649   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.487649   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.488267   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2pfcl","generateName":"kube-proxy-","namespace":"kube-system","uid":"61b5d2fb-802c-4b84-b7fa-7a7e9e024028","resourceVersion":"1631","creationTimestamp":"2023-12-26T23:06:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1226 23:21:04.684794   14940 request.go:629] Waited for 195.8092ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:21:04.685138   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:21:04.685138   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.685138   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.685138   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.691758   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:04.691864   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.691864   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.691864   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.691864   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.691934   14940 round_trippers.go:580]     Audit-Id: 0d223557-b159-4995-900f-83c9b094ee2d
	I1226 23:21:04.691934   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.691934   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.691934   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m03","uid":"ef364efe-5dc7-4fb4-bc7c-76a3eaa41ba4","resourceVersion":"1649","creationTimestamp":"2023-12-26T23:16:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:16:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I1226 23:21:04.692463   14940 pod_ready.go:92] pod "kube-proxy-2pfcl" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:04.692463   14940 pod_ready.go:81] duration metric: took 346.1436ms waiting for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.692463   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:04.889510   14940 request.go:629] Waited for 196.7269ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:04.889871   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:04.889907   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:04.889907   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:04.889978   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:04.894314   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:04.894314   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:04.894314   14940 round_trippers.go:580]     Audit-Id: 7cbd3e3f-fdd5-4521-88f2-83ba565b6a4e
	I1226 23:21:04.894314   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:04.894930   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:04.894930   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:04.894930   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:04.894930   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:04 GMT
	I1226 23:21:04.895357   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"635","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1226 23:21:05.093494   14940 request.go:629] Waited for 197.2379ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:05.093629   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:05.093629   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.093629   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.093682   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.097038   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:05.097038   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Audit-Id: b6f98693-ab5b-43c5-b2eb-76034e9076e8
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.097038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.097038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.097038   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.097694   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"1620","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I1226 23:21:05.098217   14940 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:05.098217   14940 pod_ready.go:81] duration metric: took 405.7545ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:05.098336   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:05.283326   14940 request.go:629] Waited for 184.727ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:05.283447   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:05.283447   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.283600   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.283692   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.287354   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:05.287354   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.287354   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Audit-Id: d370c4f1-ca77-4cef-8c37-68de8a734069
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.288238   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.288311   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.288660   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"1715","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I1226 23:21:05.488850   14940 request.go:629] Waited for 198.913ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.488850   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.488850   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.488850   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.488850   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.496833   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:05.496833   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.496833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.496833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Audit-Id: 94666ccb-a760-4f83-9be8-31e08a69e36a
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.496833   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.496833   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:05.497830   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-proxy-hzcqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.497830   14940 pod_ready.go:81] duration metric: took 399.4942ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:05.497830   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-proxy-hzcqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.497830   14940 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:05.695128   14940 request.go:629] Waited for 197.0316ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:05.695276   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:05.695276   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.695276   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.695276   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.698894   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:05.698894   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Audit-Id: 2121d241-7895-4b34-ac5f-3ebabe01122e
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.698894   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.698894   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.698894   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.698894   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"1711","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
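The recurring `request.go:629` lines ("Waited for … due to client-side throttling, not priority and fairness") come from client-go's client-side rate limiter, which is a token bucket: each request consumes a token, tokens refill at a fixed QPS, and the logged wait is the time spent blocking for a token. A minimal Python sketch of that mechanism (minikube itself is Go, and the `qps`/`burst` values here are illustrative assumptions, not minikube's actual settings):

```python
import time

class TokenBucket:
    """Token-bucket limiter sketch; qps/burst values are illustrative."""
    def __init__(self, qps: float, burst: int):
        self.rate = qps
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def wait(self) -> float:
        """Block until a token is available; return the seconds waited."""
        now = time.monotonic()
        # Refill tokens earned since the last call, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0          # no throttling needed
        delay = (1.0 - self.tokens) / self.rate
        time.sleep(delay)       # this is the wait the log reports
        self.tokens = 0.0
        self.last = time.monotonic()
        return delay
```

A burst of requests drains the bucket, after which each call blocks for roughly `1/qps` seconds — the ~180–200ms waits in the log are exactly this refill delay being reported.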
	I1226 23:21:05.881022   14940 request.go:629] Waited for 181.139ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.881179   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:05.881179   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.881179   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.881179   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.885803   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:05.885803   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Audit-Id: b4589e11-d1ce-4916-8e27-355a74a2a66a
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.886360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.886360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.886360   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.886571   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:05.887050   14940 pod_ready.go:97] node "multinode-455300" hosting pod "kube-scheduler-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.887050   14940 pod_ready.go:81] duration metric: took 389.2194ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	E1226 23:21:05.887161   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300" hosting pod "kube-scheduler-multinode-455300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300" has status "Ready":"False"
	I1226 23:21:05.887161   14940 pod_ready.go:38] duration metric: took 1.5996091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:05.887161   14940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 23:21:05.901927   14940 command_runner.go:130] > -16
	I1226 23:21:05.902568   14940 ops.go:34] apiserver oom_adj: -16
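The `restartCluster` step above verifies the apiserver's OOM protection by running `cat /proc/$(pgrep kube-apiserver)/oom_adj` and logging the result (-16 in this run; negative values tell the kernel to prefer other processes for OOM-kill). A sketch of validating that output — the helper name is hypothetical, the -16 value is taken from the log:

```python
def apiserver_oom_protected(oom_adj_output: str) -> bool:
    """True when the value read from /proc/<pid>/oom_adj is negative,
    i.e. the process is deprioritized for OOM-kill.
    Hypothetical helper; the -16 sample value comes from the log above."""
    return int(oom_adj_output.strip()) < 0

print(apiserver_oom_protected("-16\n"))  # True
```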
	I1226 23:21:05.902655   14940 kubeadm.go:640] restartCluster took 16.5031542s
	I1226 23:21:05.902655   14940 kubeadm.go:406] StartCluster complete in 16.5768716s
	I1226 23:21:05.902722   14940 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:21:05.902781   14940 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:21:05.904208   14940 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:21:05.906209   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 23:21:05.906315   14940 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 23:21:05.910078   14940 out.go:177] * Enabled addons: 
	I1226 23:21:05.906904   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:21:05.912508   14940 addons.go:508] enable addons completed in 6.1507ms: enabled=[]
	I1226 23:21:05.922440   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:21:05.923103   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:21:05.924968   14940 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 23:21:05.925304   14940 round_trippers.go:463] GET https://172.21.182.57:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 23:21:05.925304   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:05.925362   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:05.925362   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:05.940689   14940 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1226 23:21:05.941041   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:05.941041   14940 round_trippers.go:580]     Audit-Id: e506e7e0-7c30-4a4c-ac0d-436e4cd19261
	I1226 23:21:05.941041   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:05.941041   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:05.941109   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:05.941143   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:05.941143   14940 round_trippers.go:580]     Content-Length: 292
	I1226 23:21:05.941143   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:05 GMT
	I1226 23:21:05.941179   14940 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"1723","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 23:21:05.941361   14940 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-455300" context rescaled to 1 replicas
	I1226 23:21:05.941361   14940 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.182.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1226 23:21:05.944904   14940 out.go:177] * Verifying Kubernetes components...
	I1226 23:21:05.960123   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:21:06.074331   14940 command_runner.go:130] > apiVersion: v1
	I1226 23:21:06.074331   14940 command_runner.go:130] > data:
	I1226 23:21:06.074613   14940 command_runner.go:130] >   Corefile: |
	I1226 23:21:06.074613   14940 command_runner.go:130] >     .:53 {
	I1226 23:21:06.074613   14940 command_runner.go:130] >         log
	I1226 23:21:06.074613   14940 command_runner.go:130] >         errors
	I1226 23:21:06.074613   14940 command_runner.go:130] >         health {
	I1226 23:21:06.074613   14940 command_runner.go:130] >            lameduck 5s
	I1226 23:21:06.074613   14940 command_runner.go:130] >         }
	I1226 23:21:06.074613   14940 command_runner.go:130] >         ready
	I1226 23:21:06.074613   14940 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1226 23:21:06.074698   14940 command_runner.go:130] >            pods insecure
	I1226 23:21:06.074698   14940 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1226 23:21:06.074698   14940 command_runner.go:130] >            ttl 30
	I1226 23:21:06.074698   14940 command_runner.go:130] >         }
	I1226 23:21:06.074698   14940 command_runner.go:130] >         prometheus :9153
	I1226 23:21:06.074698   14940 command_runner.go:130] >         hosts {
	I1226 23:21:06.074698   14940 command_runner.go:130] >            172.21.176.1 host.minikube.internal
	I1226 23:21:06.074698   14940 command_runner.go:130] >            fallthrough
	I1226 23:21:06.074767   14940 command_runner.go:130] >         }
	I1226 23:21:06.074767   14940 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1226 23:21:06.074767   14940 command_runner.go:130] >            max_concurrent 1000
	I1226 23:21:06.074767   14940 command_runner.go:130] >         }
	I1226 23:21:06.074767   14940 command_runner.go:130] >         cache 30
	I1226 23:21:06.074767   14940 command_runner.go:130] >         loop
	I1226 23:21:06.074767   14940 command_runner.go:130] >         reload
	I1226 23:21:06.074767   14940 command_runner.go:130] >         loadbalance
	I1226 23:21:06.074767   14940 command_runner.go:130] >     }
	I1226 23:21:06.074767   14940 command_runner.go:130] > kind: ConfigMap
	I1226 23:21:06.074767   14940 command_runner.go:130] > metadata:
	I1226 23:21:06.074767   14940 command_runner.go:130] >   creationTimestamp: "2023-12-26T22:58:16Z"
	I1226 23:21:06.074767   14940 command_runner.go:130] >   name: coredns
	I1226 23:21:06.074767   14940 command_runner.go:130] >   namespace: kube-system
	I1226 23:21:06.074767   14940 command_runner.go:130] >   resourceVersion: "401"
	I1226 23:21:06.074767   14940 command_runner.go:130] >   uid: d1f0a471-f150-4768-9d56-de6f75812b72
	I1226 23:21:06.078528   14940 node_ready.go:35] waiting up to 6m0s for node "multinode-455300" to be "Ready" ...
	I1226 23:21:06.078679   14940 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
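The `start.go:902` line skips rewriting the CoreDNS ConfigMap because the Corefile's `hosts { … }` block (dumped just above) already maps `172.21.176.1` to `host.minikube.internal`. A minimal sketch of that check, assuming a simple, non-nested `hosts` block; the function names are hypothetical, not minikube's actual Go code:

```python
import re

def hosts_block(corefile: str) -> str:
    """Body of the first hosts { ... } block, or '' when absent."""
    m = re.search(r"hosts\s*\{(.*?)\}", corefile, re.S)
    return m.group(1) if m else ""

def has_host_record(corefile: str, hostname: str) -> bool:
    # Each entry line is "<ip> <hostname>"; match on whole tokens.
    return any(hostname in line.split()
               for line in hosts_block(corefile).splitlines())

corefile = """.:53 {
    hosts {
       172.21.176.1 host.minikube.internal
       fallthrough
    }
}"""
print(has_host_record(corefile, "host.minikube.internal"))  # True -> skip the update
```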
	I1226 23:21:06.086559   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:06.086559   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:06.086634   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:06.086634   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:06.091355   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:06.091529   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:06.091529   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:06.091529   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:06.091529   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:06 GMT
	I1226 23:21:06.091593   14940 round_trippers.go:580]     Audit-Id: 4e7b6543-be36-41f1-84b2-336f6eaa0c5e
	I1226 23:21:06.091593   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:06.091593   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:06.091951   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:06.583599   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:06.583599   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:06.583599   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:06.583599   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:06.589308   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:06.589308   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:06.589308   14940 round_trippers.go:580]     Audit-Id: 29b1d0ed-a056-451a-8910-bc172f7cd031
	I1226 23:21:06.589754   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:06.589754   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:06.589754   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:06.589813   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:06.589813   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:06 GMT
	I1226 23:21:06.590225   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:07.089347   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:07.089347   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:07.089347   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:07.089347   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:07.093984   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:07.093984   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Audit-Id: 322bc397-dc9b-4a17-81da-ab8b96a424f4
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:07.093984   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:07.093984   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:07.093984   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:07 GMT
	I1226 23:21:07.095535   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:07.586319   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:07.586319   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:07.586319   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:07.586319   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:07.591318   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:07.592494   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:07.592494   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:07.592494   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:07 GMT
	I1226 23:21:07.592494   14940 round_trippers.go:580]     Audit-Id: 5b68ab17-c513-433c-a48e-f95ee97e581d
	I1226 23:21:07.592494   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:08.090415   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:08.090415   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:08.090415   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:08.090415   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:08.095036   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:08.095036   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:08.095036   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:08.095036   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:08.095036   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:08.095036   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:08.095036   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:08 GMT
	I1226 23:21:08.095265   14940 round_trippers.go:580]     Audit-Id: cdd9a33c-2c99-45ad-b0fc-618d34699838
	I1226 23:21:08.095602   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:08.095664   14940 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 23:21:08.593915   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:08.593915   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:08.593915   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:08.593915   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:08.597517   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:08.597517   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Audit-Id: f15ce175-a97a-4089-8a94-ad8f621481c1
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:08.597517   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:08.597517   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:08.597517   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:08 GMT
	I1226 23:21:08.597517   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:09.084711   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:09.084711   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:09.084711   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:09.084711   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:09.089317   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:09.089317   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:09.089317   14940 round_trippers.go:580]     Audit-Id: f716392b-3a8f-4385-b8d3-2b83cb0facae
	I1226 23:21:09.089733   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:09.089733   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:09.089733   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:09.089796   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:09.089796   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:09 GMT
	I1226 23:21:09.089796   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:09.588518   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:09.588518   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:09.588596   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:09.588596   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:09.593477   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:09.593477   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Audit-Id: 4ba6c07b-37db-48a3-ae1e-34178f0bfecf
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:09.593599   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:09.593599   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:09.593599   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:09 GMT
	I1226 23:21:09.593756   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:10.090170   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:10.090238   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:10.090238   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:10.090367   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:10.094081   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:10.094081   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:10.094081   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:10.094081   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:10 GMT
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Audit-Id: 511fb29e-2052-48ea-b880-2f503b6c62e4
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:10.094081   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:10.095306   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:10.095879   14940 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 23:21:10.591707   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:10.591825   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:10.591825   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:10.591825   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:10.596204   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:10.596204   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Audit-Id: 8a53d031-0410-4ecd-b494-142c0cdd03ee
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:10.596204   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:10.596204   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:10.596204   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:10 GMT
	I1226 23:21:10.596764   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:11.092170   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:11.092294   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:11.092294   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:11.092294   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:11.099716   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:11.099716   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:11.099716   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:11.099716   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:11 GMT
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Audit-Id: a8ee693b-18e6-4ed6-aa10-81db026f542c
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:11.099716   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:11.101361   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:11.593979   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:11.593979   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:11.593979   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:11.593979   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:11.599035   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:11.599098   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:11.599098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:11.599098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:11 GMT
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Audit-Id: 4bdb4fbc-7a79-4e68-81b4-2899c1974fcd
	I1226 23:21:11.599098   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:11.599098   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:12.079474   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:12.079572   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:12.079572   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:12.079572   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:12.082984   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:12.082984   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:12.083528   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:12.083528   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:12.083528   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:12.083582   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:12 GMT
	I1226 23:21:12.083582   14940 round_trippers.go:580]     Audit-Id: 143ef602-5804-48f2-88bf-27e705422d9a
	I1226 23:21:12.083582   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:12.083582   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1703","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I1226 23:21:12.582393   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:12.582393   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:12.582393   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:12.582393   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:12.669384   14940 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I1226 23:21:12.669384   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:12.669384   14940 round_trippers.go:580]     Audit-Id: edc82952-7c8e-4222-9877-223c8b2dc5e5
	I1226 23:21:12.669384   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:12.669384   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:12.670109   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:12.670109   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:12.670109   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:12 GMT
	I1226 23:21:12.678397   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1813","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I1226 23:21:12.679231   14940 node_ready.go:58] node "multinode-455300" has status "Ready":"False"
	I1226 23:21:13.090046   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:13.090046   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.090141   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.090141   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.093490   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:13.093490   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.093490   14940 round_trippers.go:580]     Audit-Id: 99222aea-dbfc-40ec-8a31-fad884b191f8
	I1226 23:21:13.093490   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.094463   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.094463   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.094511   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.094511   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.095254   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:13.095793   14940 node_ready.go:49] node "multinode-455300" has status "Ready":"True"
	I1226 23:21:13.095793   14940 node_ready.go:38] duration metric: took 7.0172664s waiting for node "multinode-455300" to be "Ready" ...
	I1226 23:21:13.095793   14940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:13.095793   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:13.095793   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.095793   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.095793   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.102410   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:13.102410   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Audit-Id: f776e3dc-84a6-4b14-9072-3a4672978898
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.102410   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.102410   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.103021   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.105564   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1842"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82829 chars]
	I1226 23:21:13.111959   14940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:13.111959   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:13.111959   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.111959   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.111959   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.117181   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:13.117181   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Audit-Id: 140e1818-106a-4c20-9a24-3ada8f4c08da
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.117181   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.117181   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.117181   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.117181   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:13.118404   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:13.118404   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.118404   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.118404   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.121852   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:13.121852   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.121962   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.121962   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Audit-Id: 15a79763-e89e-4c7f-b4aa-7c227a2ddb98
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.121962   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.122036   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:13.625884   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:13.625884   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.625884   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.625884   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.629941   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:13.629941   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Audit-Id: 04c076f0-250b-417a-8af6-409b73c2e5d1
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.629941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.629941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.629941   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.629941   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:13.631502   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:13.631555   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:13.631591   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:13.631620   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:13.634591   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:13.634863   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:13.634863   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:13.634863   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:13 GMT
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Audit-Id: b687506b-0cb6-4a94-bcb2-7522a0bcdf22
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:13.634863   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:13.634863   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:14.113331   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:14.113450   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.113450   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.113450   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.118856   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:14.119359   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.119359   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Audit-Id: bebffbf2-f9b8-478e-a58b-e9afc0d64b83
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.119359   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.119359   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.119359   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:14.120424   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:14.120424   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.120424   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.120424   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.124581   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:14.124581   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.124581   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.124733   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.124733   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.124733   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.124733   14940 round_trippers.go:580]     Audit-Id: b27154e7-93d4-4696-bf13-c3a6a0ee9af5
	I1226 23:21:14.124733   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.125085   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:14.628006   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:14.628006   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.628006   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.628006   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.634892   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:14.634892   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.634892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.634892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Audit-Id: 19132b50-b5b1-4598-ab83-13bdb6531726
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.634892   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.635633   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:14.636250   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:14.636365   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:14.636365   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:14.636448   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:14.639858   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:14.639858   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Audit-Id: 4feeec5c-bfde-4880-bbb3-beb504b4e92e
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:14.639858   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:14.639858   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:14.639858   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:14 GMT
	I1226 23:21:14.640338   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:15.125755   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:15.125872   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.125872   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.125872   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.130292   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:15.130525   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.130525   14940 round_trippers.go:580]     Audit-Id: f711eb63-786d-41fd-9469-5ba682053a59
	I1226 23:21:15.130525   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.130525   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.130618   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.130618   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.130618   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.131020   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:15.131800   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:15.131800   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.131800   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.131800   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.135175   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:15.135175   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Audit-Id: 7b2a82a1-853d-4acf-8e2e-867a3b8179db
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.135175   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.135175   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.135175   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.136000   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:15.136448   14940 pod_ready.go:102] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"False"
	I1226 23:21:15.627227   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:15.627312   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.627312   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.627312   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.634357   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:15.634357   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.634357   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.634357   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Audit-Id: 36bef6db-9e7a-4eb7-a698-1f9a76699551
	I1226 23:21:15.634357   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.634906   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:15.635183   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:15.635183   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:15.635183   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:15.635183   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:15.638833   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:15.638833   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:15 GMT
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Audit-Id: 16f3834b-7673-4e15-a342-97c762c29630
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:15.638833   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:15.639753   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:15.639753   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:15.639826   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:16.125877   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:16.125993   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.125993   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.126081   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.130483   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:16.130483   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.130483   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.130483   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Audit-Id: 8e6e104a-3182-4da3-a91c-74a0d7ffed6a
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.130483   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.130978   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:16.131954   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:16.132031   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.132031   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.132031   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.135422   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:16.135469   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.135469   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Audit-Id: 61746991-24ad-4d14-aecf-a0072794d2c6
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.135469   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.135469   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.135568   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:16.626726   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:16.626807   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.626872   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.626872   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.636320   14940 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 23:21:16.636320   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Audit-Id: a859e65d-4e18-4d24-a560-6227f0f7f5cd
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.636320   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.636320   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.636494   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.636926   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:16.637686   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:16.637747   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:16.637747   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:16.637747   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:16.641028   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:16.641028   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:16 GMT
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Audit-Id: 8e4b7dbe-5955-4026-ad2d-7ed580c5ef9a
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:16.641028   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:16.641991   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:16.641991   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:16.642441   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:17.115574   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:17.115704   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.115704   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.115759   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.119166   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:17.119166   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.119166   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.119166   14940 round_trippers.go:580]     Audit-Id: 22c614e7-b5f2-4f06-85bf-43b89b49e89d
	I1226 23:21:17.120095   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.120095   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.120095   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.120095   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.120429   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:17.121120   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:17.121120   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.121120   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.121120   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.129378   14940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 23:21:17.129378   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Audit-Id: b9d461a6-7d3e-4dd2-8824-405200409c9f
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.129378   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.129378   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.129378   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.129378   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:17.621976   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:17.621976   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.621976   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.621976   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.627660   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:17.627711   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.627711   14940 round_trippers.go:580]     Audit-Id: ed024c82-4969-47e9-98f7-33efbb01712e
	I1226 23:21:17.627711   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.627711   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.627791   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.627791   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.627791   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.628016   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:17.628420   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:17.628420   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:17.628420   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:17.628420   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:17.632005   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:17.632005   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Audit-Id: 66ca5c11-0ce1-411b-9910-9e657f544e40
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:17.632005   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:17.632005   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:17.632005   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:17 GMT
	I1226 23:21:17.632005   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1833","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5357 chars]
	I1226 23:21:17.632985   14940 pod_ready.go:102] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"False"
	I1226 23:21:18.115446   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:18.115446   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.115446   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.115446   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.118143   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:18.119184   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.119184   14940 round_trippers.go:580]     Audit-Id: 4c2bcdd1-7927-402d-8b4d-cc7f032e456a
	I1226 23:21:18.119184   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.119184   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.119269   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.119269   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.119269   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.119547   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:18.119857   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:18.119857   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.119857   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.119857   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.123546   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:18.123546   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Audit-Id: 4c87edfa-94fa-49cc-95bd-f378ff7f3ac3
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.124384   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.124384   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.124384   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.124747   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:18.616690   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:18.616690   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.616690   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.616690   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.623729   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:21:18.623729   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.623729   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.624582   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Audit-Id: 9b6dc645-422d-454b-959b-4b5af5c1510b
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.624582   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.624717   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1713","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1226 23:21:18.625698   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:18.625698   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:18.625698   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:18.625698   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:18.628196   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:18.628196   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:18 GMT
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Audit-Id: 948e5520-4b13-4c71-991b-125c76d52409
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:18.628196   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:18.628196   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:18.628196   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:18.628196   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.121073   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:21:19.121073   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.121170   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.121170   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.126515   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:21:19.126872   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.126872   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.126872   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.126872   14940 round_trippers.go:580]     Audit-Id: 3d17edbd-97d5-417f-9adb-b27a4e02f8a2
	I1226 23:21:19.127308   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1226 23:21:19.128098   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.128098   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.128170   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.128170   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.134753   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:19.134846   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.134846   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.134846   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.134846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.134846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.134846   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.134911   14940 round_trippers.go:580]     Audit-Id: 5ff636e5-1579-41de-a3c4-a817fec59187
	I1226 23:21:19.134911   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.135542   14940 pod_ready.go:92] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.135542   14940 pod_ready.go:81] duration metric: took 6.0235835s waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.135542   14940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.135542   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:21:19.135542   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.135542   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.135542   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.138521   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:19.138521   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.138521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.138521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.138521   14940 round_trippers.go:580]     Audit-Id: d996b836-b3cd-4cb5-b240-2c4f3f199630
	I1226 23:21:19.139582   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1834","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I1226 23:21:19.139582   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.139582   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.139582   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.139582   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.143808   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:19.143808   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.144146   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.144146   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Audit-Id: 9fb21395-67bb-46a7-b969-31e6d9bdc713
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.144146   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.144507   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.144895   14940 pod_ready.go:92] pod "etcd-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.144961   14940 pod_ready.go:81] duration metric: took 9.3534ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.144961   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.144961   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:21:19.144961   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.144961   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.144961   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.147548   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:21:19.147548   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Audit-Id: 10708729-c7e5-44ee-9b82-d6357960d787
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.147548   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.147548   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.147548   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.147548   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"bbe5516b-f745-4a20-8df3-3cd3ac15d7f6","resourceVersion":"1836","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.182.57:8443","kubernetes.io/config.hash":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.mirror":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.seen":"2023-12-26T23:20:52.614245928Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I1226 23:21:19.148946   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.148946   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.148946   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.148946   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.152204   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.153176   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.153176   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.153176   14940 round_trippers.go:580]     Audit-Id: 3b603720-1635-435a-a1c5-ecbd31ad5b11
	I1226 23:21:19.153176   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.153235   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.153235   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.153235   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.153467   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.153684   14940 pod_ready.go:92] pod "kube-apiserver-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.153684   14940 pod_ready.go:81] duration metric: took 8.7229ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.153684   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.153684   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:21:19.153684   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.153684   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.153684   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.158367   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:19.158367   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.158367   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.158367   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.158367   14940 round_trippers.go:580]     Audit-Id: af9ebf6c-96b7-4798-acd6-7cbc81a1a34f
	I1226 23:21:19.158367   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"1844","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1226 23:21:19.159878   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.159878   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.159878   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.159878   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.163473   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.163473   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.163473   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.163473   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.163473   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.163473   14940 round_trippers.go:580]     Audit-Id: c1de5a6c-0c5c-426d-a579-793f33575fea
	I1226 23:21:19.163473   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.163613   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.163895   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.164395   14940 pod_ready.go:92] pod "kube-controller-manager-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.164482   14940 pod_ready.go:81] duration metric: took 10.7779ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.164482   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.164544   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:21:19.164641   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.164641   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.164717   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.168470   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.169451   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.169451   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.169521   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.169521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.169521   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.169521   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.169521   14940 round_trippers.go:580]     Audit-Id: a075c014-6fe1-489a-b4e7-c4acaaf3ae97
	I1226 23:21:19.169712   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2pfcl","generateName":"kube-proxy-","namespace":"kube-system","uid":"61b5d2fb-802c-4b84-b7fa-7a7e9e024028","resourceVersion":"1631","creationTimestamp":"2023-12-26T23:06:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1226 23:21:19.170585   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:21:19.170585   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.170585   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.170585   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.173913   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.174032   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.174032   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.174032   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.174032   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.174105   14940 round_trippers.go:580]     Audit-Id: 47b46aeb-048c-437f-8b98-514bf85dc611
	I1226 23:21:19.174105   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.174105   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.175196   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m03","uid":"ef364efe-5dc7-4fb4-bc7c-76a3eaa41ba4","resourceVersion":"1649","creationTimestamp":"2023-12-26T23:16:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:16:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3636 chars]
	I1226 23:21:19.175196   14940 pod_ready.go:92] pod "kube-proxy-2pfcl" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.175733   14940 pod_ready.go:81] duration metric: took 11.2513ms waiting for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.175793   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.325839   14940 request.go:629] Waited for 149.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:19.325959   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:21:19.325959   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.325959   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.325959   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.329564   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.330605   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.330624   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.330624   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.330624   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.330624   14940 round_trippers.go:580]     Audit-Id: eed0cb80-790c-4792-8b57-2ac8fe578101
	I1226 23:21:19.330706   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.330706   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.330706   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"635","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I1226 23:21:19.530123   14940 request.go:629] Waited for 198.2616ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:19.530316   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:21:19.530316   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.530316   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.530316   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.533764   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.533764   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.533764   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.533764   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Audit-Id: f7d5ba08-54d3-413f-8320-5827ef4a6f89
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.533764   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.534846   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d","resourceVersion":"1620","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_16_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3819 chars]
	I1226 23:21:19.535462   14940 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.535462   14940 pod_ready.go:81] duration metric: took 359.6695ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.535462   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.733138   14940 request.go:629] Waited for 197.3425ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:19.733223   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:21:19.733223   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.733406   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.733406   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.738168   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:21:19.738168   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Audit-Id: 64d66e5c-98cf-4bf7-9eaa-bf91661f49ea
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.738168   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.738168   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.738168   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.738544   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"1829","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I1226 23:21:19.922460   14940 request.go:629] Waited for 183.3009ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.922537   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:19.922537   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:19.922537   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:19.922537   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:19.927013   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:19.927013   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:19.927013   14940 round_trippers.go:580]     Audit-Id: 86aa1e31-c14a-408d-91cc-17453863c8b0
	I1226 23:21:19.927013   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:19.927695   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:19.927695   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:19.927695   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:19.927695   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:19 GMT
	I1226 23:21:19.929418   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:19.929953   14940 pod_ready.go:92] pod "kube-proxy-hzcqb" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:19.929953   14940 pod_ready.go:81] duration metric: took 394.4913ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:19.929953   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:20.126595   14940 request.go:629] Waited for 196.4678ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:20.127042   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:21:20.127042   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.127042   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.127186   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.131631   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:20.131631   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Audit-Id: a29fb9c6-d461-4e1a-a02c-9e4c35cc878d
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.131631   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.131631   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.131631   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.132523   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"1839","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1226 23:21:20.331638   14940 request.go:629] Waited for 197.7397ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:20.331780   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:21:20.332006   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.332006   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.332006   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.336629   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:20.336629   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Audit-Id: 2d4cdfb6-4b90-4ccd-9dac-3a6de5a86383
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.336629   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.336629   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.336629   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.337450   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:21:20.337987   14940 pod_ready.go:92] pod "kube-scheduler-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:21:20.337987   14940 pod_ready.go:81] duration metric: took 408.0337ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:21:20.337987   14940 pod_ready.go:38] duration metric: took 7.2421951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:21:20.338126   14940 api_server.go:52] waiting for apiserver process to appear ...
	I1226 23:21:20.352056   14940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:21:20.375522   14940 command_runner.go:130] > 1851
	I1226 23:21:20.375652   14940 api_server.go:72] duration metric: took 14.4342941s to wait for apiserver process to appear ...
	I1226 23:21:20.375652   14940 api_server.go:88] waiting for apiserver healthz status ...
	I1226 23:21:20.375717   14940 api_server.go:253] Checking apiserver healthz at https://172.21.182.57:8443/healthz ...
	I1226 23:21:20.385153   14940 api_server.go:279] https://172.21.182.57:8443/healthz returned 200:
	ok
	I1226 23:21:20.386338   14940 round_trippers.go:463] GET https://172.21.182.57:8443/version
	I1226 23:21:20.386338   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.386338   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.386404   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.388029   14940 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 23:21:20.388029   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.388397   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.388397   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Content-Length: 264
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Audit-Id: 31aa4770-6938-4f49-87d9-530811e84a58
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.388397   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.388506   14940 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1226 23:21:20.388599   14940 api_server.go:141] control plane version: v1.28.4
	I1226 23:21:20.388599   14940 api_server.go:131] duration metric: took 12.9468ms to wait for apiserver health ...
	I1226 23:21:20.388599   14940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 23:21:20.536020   14940 request.go:629] Waited for 147.1195ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.536020   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.536232   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.536232   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.536286   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.546869   14940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1226 23:21:20.546869   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Audit-Id: 716bc06f-3023-4990-aa24-228f200b4431
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.546869   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.546869   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.546869   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.549541   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82515 chars]
	I1226 23:21:20.553624   14940 system_pods.go:59] 12 kube-system pods found
	I1226 23:21:20.553698   14940 system_pods.go:61] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "etcd-multinode-455300" [cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kindnet-8jsvj" [376eb267-ce7d-4497-a85e-ff9224a25347] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kindnet-zt55b" [43604859-483f-4e92-a16c-d3f30cb6e4f1] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kube-apiserver-multinode-455300" [bbe5516b-f745-4a20-8df3-3cd3ac15d7f6] Running
	I1226 23:21:20.553698   14940 system_pods.go:61] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-proxy-2pfcl" [61b5d2fb-802c-4b84-b7fa-7a7e9e024028] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-proxy-bqlf8" [1caff24c-909f-42a9-a4b8-d9c8c1ec8828] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running
	I1226 23:21:20.553774   14940 system_pods.go:61] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running
	I1226 23:21:20.553774   14940 system_pods.go:74] duration metric: took 165.1747ms to wait for pod list to return data ...
	I1226 23:21:20.553774   14940 default_sa.go:34] waiting for default service account to be created ...
	I1226 23:21:20.737346   14940 request.go:629] Waited for 183.354ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/default/serviceaccounts
	I1226 23:21:20.737346   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/default/serviceaccounts
	I1226 23:21:20.737346   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.737346   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.737346   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.741936   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:21:20.742650   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.742650   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Content-Length: 262
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Audit-Id: c858c45b-5f76-4d05-98a5-2322b7682e59
	I1226 23:21:20.742650   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.742727   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.742745   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.742745   14940 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"52815640-9603-4e59-b38b-e19ec6f4b307","resourceVersion":"349","creationTimestamp":"2023-12-26T22:58:29Z"}}]}
	I1226 23:21:20.743161   14940 default_sa.go:45] found service account: "default"
	I1226 23:21:20.743161   14940 default_sa.go:55] duration metric: took 189.3873ms for default service account to be created ...
	I1226 23:21:20.743240   14940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 23:21:20.923564   14940 request.go:629] Waited for 180.2149ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.923564   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:21:20.923564   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:20.923564   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:20.923564   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:20.934050   14940 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1226 23:21:20.934050   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:20.934050   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:20.934050   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:20 GMT
	I1226 23:21:20.934050   14940 round_trippers.go:580]     Audit-Id: efc9f108-7827-4ab4-b998-a27e99ed68ad
	I1226 23:21:20.934836   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:20.934836   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:20.934836   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:20.937531   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82515 chars]
	I1226 23:21:20.941584   14940 system_pods.go:86] 12 kube-system pods found
	I1226 23:21:20.941675   14940 system_pods.go:89] "coredns-5dd5756b68-fj9bd" [fbc5229e-2af2-4e17-b23c-ebf836a42aa2] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "etcd-multinode-455300" [cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kindnet-8jsvj" [376eb267-ce7d-4497-a85e-ff9224a25347] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kindnet-zt55b" [43604859-483f-4e92-a16c-d3f30cb6e4f1] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kindnet-zxd45" [686e296b-23ae-4a1e-bc14-2dea164b0c29] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-apiserver-multinode-455300" [bbe5516b-f745-4a20-8df3-3cd3ac15d7f6] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-controller-manager-multinode-455300" [fdaf236b-e792-4278-908c-34b337b97beb] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-proxy-2pfcl" [61b5d2fb-802c-4b84-b7fa-7a7e9e024028] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-proxy-bqlf8" [1caff24c-909f-42a9-a4b8-d9c8c1ec8828] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-proxy-hzcqb" [0027fd42-fa64-4d1d-acc8-36e7b41e4838] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "kube-scheduler-multinode-455300" [58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1] Running
	I1226 23:21:20.941675   14940 system_pods.go:89] "storage-provisioner" [e274f19d-1940-400d-b887-aaf390e64fdd] Running
	I1226 23:21:20.941831   14940 system_pods.go:126] duration metric: took 198.5904ms to wait for k8s-apps to be running ...
	I1226 23:21:20.941831   14940 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 23:21:20.954560   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:21:20.976564   14940 system_svc.go:56] duration metric: took 33.7191ms WaitForService to wait for kubelet.
	I1226 23:21:20.976564   14940 kubeadm.go:581] duration metric: took 15.0352054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 23:21:20.976564   14940 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:21:21.127064   14940 request.go:629] Waited for 150.3744ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes
	I1226 23:21:21.127288   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes
	I1226 23:21:21.127376   14940 round_trippers.go:469] Request Headers:
	I1226 23:21:21.127442   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:21:21.127486   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:21:21.134080   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:21:21.134080   14940 round_trippers.go:577] Response Headers:
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:21:21 GMT
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Audit-Id: f3485c07-5450-4c04-bd6d-b43e51c0d330
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:21:21.134080   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:21:21.134080   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:21:21.134080   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:21:21.134601   14940 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1869"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14730 chars]
	I1226 23:21:21.135549   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:21.135549   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:21.135549   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:21.135549   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:21.135645   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:21:21.135645   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:21:21.135645   14940 node_conditions.go:105] duration metric: took 159.0819ms to run NodePressure ...
	I1226 23:21:21.135645   14940 start.go:228] waiting for startup goroutines ...
	I1226 23:21:21.135645   14940 start.go:233] waiting for cluster config update ...
	I1226 23:21:21.135645   14940 start.go:242] writing updated cluster config ...
	I1226 23:21:21.149762   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:21:21.149854   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:21:21.157485   14940 out.go:177] * Starting worker node multinode-455300-m02 in cluster multinode-455300
	I1226 23:21:21.161093   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:21:21.161093   14940 cache.go:56] Caching tarball of preloaded images
	I1226 23:21:21.162151   14940 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 23:21:21.162151   14940 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 23:21:21.162151   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:21:21.164302   14940 start.go:365] acquiring machines lock for multinode-455300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:21:21.164302   14940 start.go:369] acquired machines lock for "multinode-455300-m02" in 0s
	I1226 23:21:21.165477   14940 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:21:21.165477   14940 fix.go:54] fixHost starting: m02
	I1226 23:21:21.166273   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:23.308632   14940 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:21:23.308632   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:23.308632   14940 fix.go:102] recreateIfNeeded on multinode-455300-m02: state=Stopped err=<nil>
	W1226 23:21:23.308632   14940 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:21:23.315224   14940 out.go:177] * Restarting existing hyperv VM for "multinode-455300-m02" ...
	I1226 23:21:23.318110   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300-m02
	I1226 23:21:26.483294   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:26.483294   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:26.483294   14940 main.go:141] libmachine: Waiting for host to start...
	I1226 23:21:26.483294   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:28.842509   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:28.842509   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:28.842509   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:31.464432   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:31.464470   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:32.465963   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:34.710514   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:34.710514   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:34.710626   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:37.308610   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:37.308661   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:38.309334   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:40.568709   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:40.568709   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:40.568709   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:43.159797   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:43.159797   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:44.174687   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:46.465398   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:46.465611   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:46.465611   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:49.091695   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:21:49.092056   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:50.094960   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:52.330822   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:52.331032   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:52.331115   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:54.992446   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:21:54.992446   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:54.995708   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:21:57.194393   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:21:57.194393   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:57.194478   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:21:59.802903   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:21:59.803109   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:21:59.803244   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:21:59.806570   14940 machine.go:88] provisioning docker machine ...
	I1226 23:21:59.806651   14940 buildroot.go:166] provisioning hostname "multinode-455300-m02"
	I1226 23:21:59.806651   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:02.045823   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:02.045823   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:02.045823   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:04.635745   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:04.635745   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:04.640663   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:04.640663   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:04.640663   14940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300-m02 && echo "multinode-455300-m02" | sudo tee /etc/hostname
	I1226 23:22:04.806907   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300-m02
	
	I1226 23:22:04.806907   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:06.989474   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:06.989474   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:06.989474   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:09.627169   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:09.627169   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:09.632601   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:09.633279   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:09.633279   14940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 23:22:09.787878   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 23:22:09.787878   14940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:22:09.787878   14940 buildroot.go:174] setting up certificates
	I1226 23:22:09.787878   14940 provision.go:83] configureAuth start
	I1226 23:22:09.787878   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:11.990188   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:11.990188   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:11.990293   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:14.596056   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:14.596056   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:14.596056   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:16.788157   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:16.788213   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:16.788213   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:19.383501   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:19.383501   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:19.383763   14940 provision.go:138] copyHostCerts
	I1226 23:22:19.384047   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:22:19.384063   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:22:19.384063   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:22:19.384835   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:22:19.385836   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:22:19.385836   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:22:19.385836   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:22:19.386646   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:22:19.387536   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:22:19.387536   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:22:19.388080   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:22:19.388384   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:22:19.389444   14940 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300-m02 san=[172.21.184.151 172.21.184.151 localhost 127.0.0.1 minikube multinode-455300-m02]
	I1226 23:22:19.537868   14940 provision.go:172] copyRemoteCerts
	I1226 23:22:19.552043   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:22:19.552043   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:21.750842   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:21.750842   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:21.750975   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:24.393903   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:24.394050   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:24.394344   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:22:24.503631   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9515884s)
	I1226 23:22:24.503754   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:22:24.504249   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:22:24.548141   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:22:24.548141   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 23:22:24.588160   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:22:24.588425   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 23:22:24.630228   14940 provision.go:86] duration metric: configureAuth took 14.8422957s
	I1226 23:22:24.630228   14940 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:22:24.630960   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:22:24.631021   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:26.808762   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:26.809060   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:26.809060   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:29.384990   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:29.385166   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:29.391701   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:29.392391   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:29.392391   14940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:22:29.535406   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:22:29.535529   14940 buildroot.go:70] root file system type: tmpfs
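The step above probes the guest's root filesystem type over SSH before deciding how to persist the Docker unit. A minimal standalone sketch of the same check (the case-branch messages are illustrative, not minikube output):

```shell
# Detect the root filesystem type the same way the provisioner does:
# print the fstype column for "/" and keep the last line.
root_fstype="$(df --output=fstype / | tail -n 1)"

# On a Buildroot live image this is typically "tmpfs"; on an installed
# system it would be ext4, xfs, overlay, etc.
case "$root_fstype" in
  tmpfs) echo "root is ephemeral: $root_fstype" ;;
  *)     echo "root is persistent: $root_fstype" ;;
esac
```

`df --output=fstype` is a GNU coreutils extension, which is why the single-column-plus-`tail` form works here without any awk parsing.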
	I1226 23:22:29.535812   14940 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:22:29.535933   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:31.722596   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:31.722698   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:31.722945   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:34.323757   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:34.323938   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:34.330303   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:34.330566   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:34.331095   14940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.21.182.57"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:22:34.494547   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.21.182.57
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:22:34.495084   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:36.643040   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:36.643040   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:36.643160   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:39.264478   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:39.264577   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:39.270152   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:39.270926   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:39.270926   14940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:22:40.604407   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:22:40.604407   14940 machine.go:91] provisioned docker machine in 40.7978553s
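The `diff ... || { mv ...; systemctl ... }` command above is a write-new-then-swap pattern: the rendered unit is written to `docker.service.new`, and the daemon is only reloaded and restarted when it differs from what is installed. A sketch of the same idea against a temp directory (paths are illustrative; the real flow runs the `systemctl` commands shown in the log instead of the `echo`):

```shell
# Idempotent unit replacement: render to <unit>.new, compare, and only
# swap + signal a reload when the content actually changed.
dir="$(mktemp -d)"
unit="$dir/docker.service"
printf '[Unit]\nDescription=old\n' > "$unit"
printf '[Unit]\nDescription=new\n' > "$unit.new"

if ! diff -u "$unit" "$unit.new" >/dev/null 2>&1; then
  mv "$unit.new" "$unit"
  echo "unit replaced"   # real flow: systemctl daemon-reload && systemctl restart docker
else
  rm -f "$unit.new"
  echo "unit unchanged"
fi
```

On a fresh node the `diff` also fails simply because the target does not exist yet, which is exactly the `can't stat '/lib/systemd/system/docker.service'` branch the log shows.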
	I1226 23:22:40.604407   14940 start.go:300] post-start starting for "multinode-455300-m02" (driver="hyperv")
	I1226 23:22:40.604407   14940 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:22:40.617787   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:22:40.617787   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:42.836676   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:42.836771   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:42.836771   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:45.445900   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:45.445900   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:45.446272   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:22:45.556413   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9385937s)
	I1226 23:22:45.571250   14940 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:22:45.575787   14940 command_runner.go:130] > NAME=Buildroot
	I1226 23:22:45.575787   14940 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:22:45.575787   14940 command_runner.go:130] > ID=buildroot
	I1226 23:22:45.575787   14940 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:22:45.575787   14940 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:22:45.576806   14940 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:22:45.576806   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:22:45.576806   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:22:45.578060   14940 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:22:45.578060   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:22:45.592640   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:22:45.611194   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:22:45.650556   14940 start.go:303] post-start completed in 5.0461547s
	I1226 23:22:45.650620   14940 fix.go:56] fixHost completed within 1m24.4851763s
	I1226 23:22:45.650682   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:47.847006   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:47.847220   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:47.847220   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:50.417927   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:50.418156   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:50.424338   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:50.425096   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:50.425096   14940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1226 23:22:50.565661   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703632970.562811137
	
	I1226 23:22:50.565716   14940 fix.go:206] guest clock: 1703632970.562811137
	I1226 23:22:50.565716   14940 fix.go:219] Guest: 2023-12-26 23:22:50.562811137 +0000 UTC Remote: 2023-12-26 23:22:45.6506208 +0000 UTC m=+232.474476101 (delta=4.912190337s)
	I1226 23:22:50.565716   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:52.762944   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:52.762944   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:52.763068   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:22:55.363815   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:22:55.363815   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:55.369424   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:22:55.370749   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.184.151 22 <nil> <nil>}
	I1226 23:22:55.370749   14940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703632970
	I1226 23:22:55.522425   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:22:50 UTC 2023
	
	I1226 23:22:55.522498   14940 fix.go:226] clock set: Tue Dec 26 23:22:50 UTC 2023
	 (err=<nil>)
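The clock fix above reads the guest's epoch time, computes the delta against the host (4.9s here), and resets the guest with `sudo date -s @<epoch>`. A sketch of the drift check with hypothetical timestamps (the 2-second tolerance is an assumption for illustration, not minikube's threshold):

```shell
# Compare a guest epoch timestamp against a host epoch and decide
# whether a resync is needed.
guest_epoch=1703632970   # e.g. from `date +%s` on the guest
host_epoch=1703632965    # hypothetical host time at the same moment

delta=$(( guest_epoch - host_epoch ))
abs_delta=${delta#-}     # strip a leading minus sign for the comparison

if [ "$abs_delta" -gt 2 ]; then
  echo "clock drift ${delta}s: would run 'sudo date -s @${host_epoch}'"
else
  echo "clock within tolerance"
fi
```

Setting the clock from the host side like this avoids depending on NTP being reachable from inside the VM.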
	I1226 23:22:55.522498   14940 start.go:83] releasing machines lock for "multinode-455300-m02", held for 1m34.358231s
	I1226 23:22:55.522770   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:22:57.689168   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:22:57.689168   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:22:57.689168   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:00.297858   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:23:00.297858   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:00.302408   14940 out.go:177] * Found network options:
	I1226 23:23:00.306495   14940 out.go:177]   - NO_PROXY=172.21.182.57
	W1226 23:23:00.308552   14940 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 23:23:00.311816   14940 out.go:177]   - NO_PROXY=172.21.182.57
	W1226 23:23:00.314257   14940 proxy.go:119] fail to check proxy env: Error ip not in block
	W1226 23:23:00.316001   14940 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 23:23:00.318516   14940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 23:23:00.319048   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:23:00.330632   14940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 23:23:00.330632   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:23:02.554383   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:02.554574   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:02.554574   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:02.585119   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:02.585119   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:02.585119   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:05.204799   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:23:05.204799   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:05.204799   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:23:05.225400   14940 main.go:141] libmachine: [stdout =====>] : 172.21.184.151
	
	I1226 23:23:05.225400   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:05.225400   14940 sshutil.go:53] new ssh client: &{IP:172.21.184.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:23:05.312294   14940 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1226 23:23:05.312900   14940 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9821031s)
	W1226 23:23:05.312900   14940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 23:23:05.326954   14940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 23:23:05.395243   14940 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 23:23:05.395243   14940 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0761964s)
	I1226 23:23:05.396211   14940 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1226 23:23:05.396211   14940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 23:23:05.396339   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:23:05.396554   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:23:05.429601   14940 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1226 23:23:05.443456   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1226 23:23:05.476860   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 23:23:05.492898   14940 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 23:23:05.504989   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 23:23:05.533288   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:23:05.563788   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 23:23:05.593129   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:23:05.622133   14940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 23:23:05.653707   14940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
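The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pinning the sandbox image, forcing `SystemdCgroup = false` for the cgroupfs driver, and normalizing the runc runtime name. The same indentation-preserving edit against a local copy (file contents are a minimal illustrative fragment):

```shell
# Apply the SystemdCgroup rewrite from the log to a scratch copy of the
# containerd CRI config.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Capture the leading whitespace so the TOML indentation survives the edit.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Note `-r`/`-i` are GNU sed options, which matches the Buildroot guest these commands run on.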
	I1226 23:23:05.684451   14940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 23:23:05.701321   14940 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 23:23:05.715176   14940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 23:23:05.746832   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:05.929680   14940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 23:23:05.959022   14940 start.go:475] detecting cgroup driver to use...
	I1226 23:23:05.973118   14940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 23:23:05.993102   14940 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1226 23:23:05.994163   14940 command_runner.go:130] > [Unit]
	I1226 23:23:05.994163   14940 command_runner.go:130] > Description=Docker Application Container Engine
	I1226 23:23:05.994163   14940 command_runner.go:130] > Documentation=https://docs.docker.com
	I1226 23:23:05.994163   14940 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1226 23:23:05.994163   14940 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1226 23:23:05.994163   14940 command_runner.go:130] > StartLimitBurst=3
	I1226 23:23:05.994163   14940 command_runner.go:130] > StartLimitIntervalSec=60
	I1226 23:23:05.994163   14940 command_runner.go:130] > [Service]
	I1226 23:23:05.994163   14940 command_runner.go:130] > Type=notify
	I1226 23:23:05.994163   14940 command_runner.go:130] > Restart=on-failure
	I1226 23:23:05.994163   14940 command_runner.go:130] > Environment=NO_PROXY=172.21.182.57
	I1226 23:23:05.994163   14940 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1226 23:23:05.994163   14940 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1226 23:23:05.994163   14940 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1226 23:23:05.994163   14940 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1226 23:23:05.994163   14940 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1226 23:23:05.994163   14940 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1226 23:23:05.994163   14940 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1226 23:23:05.994163   14940 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1226 23:23:05.994163   14940 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1226 23:23:05.994163   14940 command_runner.go:130] > ExecStart=
	I1226 23:23:05.994163   14940 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1226 23:23:05.994163   14940 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1226 23:23:05.994163   14940 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1226 23:23:05.994163   14940 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1226 23:23:05.994163   14940 command_runner.go:130] > LimitNOFILE=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > LimitNPROC=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > LimitCORE=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1226 23:23:05.994163   14940 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1226 23:23:05.994163   14940 command_runner.go:130] > TasksMax=infinity
	I1226 23:23:05.994163   14940 command_runner.go:130] > TimeoutStartSec=0
	I1226 23:23:05.994163   14940 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1226 23:23:05.994163   14940 command_runner.go:130] > Delegate=yes
	I1226 23:23:05.994163   14940 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1226 23:23:05.994163   14940 command_runner.go:130] > KillMode=process
	I1226 23:23:05.994163   14940 command_runner.go:130] > [Install]
	I1226 23:23:05.994163   14940 command_runner.go:130] > WantedBy=multi-user.target
	I1226 23:23:06.008099   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:23:06.040097   14940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 23:23:06.079091   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:23:06.116040   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:23:06.157147   14940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1226 23:23:06.220663   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:23:06.242821   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:23:06.273078   14940 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1226 23:23:06.286747   14940 ssh_runner.go:195] Run: which cri-dockerd
	I1226 23:23:06.292803   14940 command_runner.go:130] > /usr/bin/cri-dockerd
	I1226 23:23:06.307049   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 23:23:06.325093   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 23:23:06.370382   14940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 23:23:06.551335   14940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 23:23:06.711334   14940 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 23:23:06.711334   14940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 23:23:06.755460   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:06.926200   14940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 23:23:08.594083   14940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6678828s)
	I1226 23:23:08.606539   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:23:08.789966   14940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1226 23:23:08.976889   14940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1226 23:23:09.162811   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:09.345730   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1226 23:23:09.394212   14940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:23:09.574500   14940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1226 23:23:09.689865   14940 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1226 23:23:09.702162   14940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1226 23:23:09.709171   14940 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1226 23:23:09.709171   14940 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 23:23:09.709171   14940 command_runner.go:130] > Device: 16h/22d	Inode: 889         Links: 1
	I1226 23:23:09.709171   14940 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1226 23:23:09.710026   14940 command_runner.go:130] > Access: 2023-12-26 23:23:09.595575506 +0000
	I1226 23:23:09.710026   14940 command_runner.go:130] > Modify: 2023-12-26 23:23:09.595575506 +0000
	I1226 23:23:09.710026   14940 command_runner.go:130] > Change: 2023-12-26 23:23:09.599575506 +0000
	I1226 23:23:09.710026   14940 command_runner.go:130] >  Birth: -
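The "Will wait 60s for socket path" step above polls until `/var/run/cri-dockerd.sock` exists after the service restart. A sketch of that wait loop (the helper name, path, and timeout below are placeholders, not minikube's implementation):

```shell
# Poll for a filesystem path to appear, giving up after a timeout.
wait_for_path() {
  path="$1"; timeout="$2"; waited=0
  until [ -e "$path" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}

touch /tmp/demo.sock   # stand-in for the cri-dockerd socket
wait_for_path /tmp/demo.sock 5 && echo "socket ready"
```

Checking the socket path first, then `crictl version`, separates "the daemon is listening" from "the CRI API answers", which is why the log shows both waits back to back.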
	I1226 23:23:09.710238   14940 start.go:543] Will wait 60s for crictl version
	I1226 23:23:09.724381   14940 ssh_runner.go:195] Run: which crictl
	I1226 23:23:09.728364   14940 command_runner.go:130] > /usr/bin/crictl
	I1226 23:23:09.742585   14940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 23:23:09.819874   14940 command_runner.go:130] > Version:  0.1.0
	I1226 23:23:09.819874   14940 command_runner.go:130] > RuntimeName:  docker
	I1226 23:23:09.819874   14940 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1226 23:23:09.819974   14940 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 23:23:09.819974   14940 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1226 23:23:09.830702   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:23:09.867621   14940 command_runner.go:130] > 24.0.7
	I1226 23:23:09.876623   14940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1226 23:23:09.912632   14940 command_runner.go:130] > 24.0.7
	I1226 23:23:09.917182   14940 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1226 23:23:09.919708   14940 out.go:177]   - env NO_PROXY=172.21.182.57
	I1226 23:23:09.922104   14940 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1226 23:23:09.925927   14940 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1226 23:23:09.929289   14940 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1226 23:23:09.929289   14940 ip.go:210] interface addr: 172.21.176.1/20
	I1226 23:23:09.940981   14940 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1226 23:23:09.947023   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 23:23:09.972056   14940 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300 for IP: 172.21.184.151
	I1226 23:23:09.972178   14940 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 23:23:09.972968   14940 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1226 23:23:09.972968   14940 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1226 23:23:09.973501   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 23:23:09.973922   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1226 23:23:09.974189   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 23:23:09.974486   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 23:23:09.975562   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1226 23:23:09.976003   14940 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1226 23:23:09.976232   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1226 23:23:09.976755   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1226 23:23:09.977320   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1226 23:23:09.977865   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1226 23:23:09.978956   14940 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1226 23:23:09.979265   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem -> /usr/share/ca-certificates/10728.pem
	I1226 23:23:09.979568   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /usr/share/ca-certificates/107282.pem
	I1226 23:23:09.979889   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:09.980854   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 23:23:10.021837   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 23:23:10.063353   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 23:23:10.106459   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1226 23:23:10.147242   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1226 23:23:10.190717   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1226 23:23:10.233539   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 23:23:10.290367   14940 ssh_runner.go:195] Run: openssl version
	I1226 23:23:10.299035   14940 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1226 23:23:10.311927   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1226 23:23:10.353069   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.360111   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.360311   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.374253   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1226 23:23:10.383405   14940 command_runner.go:130] > 3ec20f2e
	I1226 23:23:10.397586   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 23:23:10.432447   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 23:23:10.465823   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.472767   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.472967   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.485697   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 23:23:10.494265   14940 command_runner.go:130] > b5213941
	I1226 23:23:10.507730   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 23:23:10.540006   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1226 23:23:10.569799   14940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.576665   14940 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.576878   14940 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.591393   14940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1226 23:23:10.600999   14940 command_runner.go:130] > 51391683
	I1226 23:23:10.614529   14940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1226 23:23:10.644520   14940 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 23:23:10.651523   14940 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 23:23:10.652112   14940 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 23:23:10.662496   14940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1226 23:23:10.700781   14940 command_runner.go:130] > cgroupfs
	I1226 23:23:10.701388   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:23:10.701452   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:23:10.701452   14940 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 23:23:10.701518   14940 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.184.151 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-455300 NodeName:multinode-455300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.182.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.184.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 23:23:10.701824   14940 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.184.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-455300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.21.184.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.182.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 23:23:10.701962   14940 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-455300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.184.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 23:23:10.715588   14940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 23:23:10.734162   14940 command_runner.go:130] > kubeadm
	I1226 23:23:10.734162   14940 command_runner.go:130] > kubectl
	I1226 23:23:10.734162   14940 command_runner.go:130] > kubelet
	I1226 23:23:10.734162   14940 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 23:23:10.746188   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1226 23:23:10.762839   14940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I1226 23:23:10.791594   14940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 23:23:10.834342   14940 ssh_runner.go:195] Run: grep 172.21.182.57	control-plane.minikube.internal$ /etc/hosts
	I1226 23:23:10.840293   14940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.21.182.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 23:23:10.858813   14940 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:23:10.858951   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:23:10.858951   14940 start.go:304] JoinCluster: &{Name:multinode-455300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-455300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.182.57 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.21.188.21 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 23:23:10.859566   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1226 23:23:10.859653   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:23:13.050673   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:13.050673   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:13.050792   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:15.670171   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:23:15.670171   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:15.670505   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:23:15.891033   14940 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e29sv0.49niog2zfjqw7ep9 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 
	I1226 23:23:15.891137   14940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (5.031515s)
	I1226 23:23:15.891137   14940 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:15.891251   14940 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:23:15.906115   14940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-455300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1226 23:23:15.906115   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:23:18.083433   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:18.083602   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:18.083602   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:20.693709   14940 main.go:141] libmachine: [stdout =====>] : 172.21.182.57
	
	I1226 23:23:20.693709   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:20.694038   14940 sshutil.go:53] new ssh client: &{IP:172.21.182.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:23:20.879991   14940 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1226 23:23:20.977635   14940 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zt55b, kube-system/kube-proxy-bqlf8
	I1226 23:23:23.015222   14940 command_runner.go:130] > node/multinode-455300-m02 cordoned
	I1226 23:23:23.015222   14940 command_runner.go:130] > pod "busybox-5bc68d56bd-bskhd" has DeletionTimestamp older than 1 seconds, skipping
	I1226 23:23:23.015222   14940 command_runner.go:130] > node/multinode-455300-m02 drained
	I1226 23:23:23.015222   14940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-455300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (7.1091083s)
	I1226 23:23:23.015345   14940 node.go:108] successfully drained node "m02"
	I1226 23:23:23.016613   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:23:23.017563   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:23:23.018649   14940 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1226 23:23:23.018950   14940 round_trippers.go:463] DELETE https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:23.018950   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:23.019013   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:23.019013   14940 round_trippers.go:473]     Content-Type: application/json
	I1226 23:23:23.019013   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:23.044652   14940 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1226 23:23:23.044652   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:23.044652   14940 round_trippers.go:580]     Audit-Id: 0dc0e723-0792-4aaf-90d1-86b99175594b
	I1226 23:23:23.044652   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:23.044652   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:23.044652   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:23.044652   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:23.044946   14940 round_trippers.go:580]     Content-Length: 171
	I1226 23:23:23.044946   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:23 GMT
	I1226 23:23:23.045025   14940 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-455300-m02","kind":"nodes","uid":"e1420937-2f10-4cda-99cd-fa0c31e0c38d"}}
	I1226 23:23:23.045086   14940 node.go:124] successfully deleted node "m02"
	I1226 23:23:23.045162   14940 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:23.045231   14940 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:23.045231   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e29sv0.49niog2zfjqw7ep9 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-455300-m02"
	I1226 23:23:23.314053   14940 command_runner.go:130] ! W1226 23:23:23.312501    1365 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1226 23:23:23.935875   14940 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 23:23:25.778060   14940 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 23:23:25.778060   14940 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1226 23:23:25.778060   14940 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 23:23:25.778060   14940 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1226 23:23:25.778060   14940 command_runner.go:130] > This node has joined the cluster:
	I1226 23:23:25.778060   14940 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1226 23:23:25.778060   14940 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1226 23:23:25.778060   14940 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1226 23:23:25.778060   14940 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e29sv0.49niog2zfjqw7ep9 --discovery-token-ca-cert-hash sha256:59aae3d8e090ab5d371ccfd9b23acdfdebb77e43326e89bce43646e9b1304925 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-455300-m02": (2.7327815s)
	I1226 23:23:25.778060   14940 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1226 23:23:26.066067   14940 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1226 23:23:26.299693   14940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-455300 minikube.k8s.io/updated_at=2023_12_26T23_23_26_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 23:23:26.474138   14940 command_runner.go:130] > node/multinode-455300-m02 labeled
	I1226 23:23:26.474221   14940 command_runner.go:130] > node/multinode-455300-m03 labeled
	I1226 23:23:26.474289   14940 start.go:306] JoinCluster complete in 15.6153408s
	I1226 23:23:26.474289   14940 cni.go:84] Creating CNI manager for ""
	I1226 23:23:26.474289   14940 cni.go:136] 3 nodes found, recommending kindnet
	I1226 23:23:26.487930   14940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 23:23:26.497065   14940 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 23:23:26.497255   14940 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1226 23:23:26.497255   14940 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1226 23:23:26.497255   14940 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 23:23:26.497255   14940 command_runner.go:130] > Access: 2023-12-26 23:19:30.718927400 +0000
	I1226 23:23:26.497368   14940 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I1226 23:23:26.497368   14940 command_runner.go:130] > Change: 2023-12-26 23:19:18.490000000 +0000
	I1226 23:23:26.497368   14940 command_runner.go:130] >  Birth: -
	I1226 23:23:26.497512   14940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 23:23:26.497512   14940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 23:23:26.559138   14940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 23:23:27.095450   14940 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:23:27.095553   14940 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 23:23:27.095553   14940 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 23:23:27.095553   14940 command_runner.go:130] > daemonset.apps/kindnet configured
	I1226 23:23:27.097264   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:23:27.098532   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:23:27.100098   14940 round_trippers.go:463] GET https://172.21.182.57:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 23:23:27.100098   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:27.100098   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:27.100098   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:27.108700   14940 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 23:23:27.109076   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:27.109076   14940 round_trippers.go:580]     Audit-Id: 0915df5d-6a11-459c-aa74-f686939ee533
	I1226 23:23:27.109076   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:27.109137   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:27.109137   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:27.109137   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:27.109137   14940 round_trippers.go:580]     Content-Length: 292
	I1226 23:23:27.109202   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:27 GMT
	I1226 23:23:27.109202   14940 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d040dd96-d104-4852-b930-38d82a1c4e71","resourceVersion":"1867","creationTimestamp":"2023-12-26T22:58:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 23:23:27.109421   14940 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-455300" context rescaled to 1 replicas
	I1226 23:23:27.109421   14940 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.21.184.151 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1226 23:23:27.111996   14940 out.go:177] * Verifying Kubernetes components...
	I1226 23:23:27.127989   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:23:27.152060   14940 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:23:27.152881   14940 kapi.go:59] client config for multinode-455300: &rest.Config{Host:"https://172.21.182.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-455300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 23:23:27.153771   14940 node_ready.go:35] waiting up to 6m0s for node "multinode-455300-m02" to be "Ready" ...
	I1226 23:23:27.153771   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:27.153771   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:27.153771   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:27.153771   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:27.165360   14940 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1226 23:23:27.165360   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:27.165360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:27.165360   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:27 GMT
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Audit-Id: f91baa4d-72f2-40df-89d4-56a0cf0559a8
	I1226 23:23:27.165360   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:27.165360   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2019","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3559 chars]
	I1226 23:23:27.668456   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:27.668456   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:27.668544   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:27.668544   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:27.673011   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:27.673011   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:27.673011   14940 round_trippers.go:580]     Audit-Id: 75f1d71d-cf6e-4a18-b949-03552444d86f
	I1226 23:23:27.673011   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:27.673011   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:27.673011   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:27.673011   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:27.673134   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:27 GMT
	I1226 23:23:27.673300   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2019","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3559 chars]
	I1226 23:23:28.173271   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:28.173271   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:28.173271   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:28.173271   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:28.177990   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:28.177990   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:28.178230   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:28.178230   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:28.178230   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:28 GMT
	I1226 23:23:28.178230   14940 round_trippers.go:580]     Audit-Id: 7138b214-5c64-4341-b442-37c1cb5605e0
	I1226 23:23:28.178230   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:28.178348   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:28.178541   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:28.657210   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:28.657456   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:28.657456   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:28.657456   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:28.665033   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:23:28.665123   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:28.665123   14940 round_trippers.go:580]     Audit-Id: fe4bade5-06a6-4cd8-8eb5-c7e60d4baaa3
	I1226 23:23:28.665123   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:28.665123   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:28.665123   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:28.665123   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:28.665200   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:28 GMT
	I1226 23:23:28.665200   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:29.160911   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:29.160911   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:29.160911   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:29.161001   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:29.165154   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:29.165154   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:29.165154   14940 round_trippers.go:580]     Audit-Id: a3fcdc92-0671-4046-824c-330b5773d3e3
	I1226 23:23:29.166116   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:29.166116   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:29.166116   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:29.166162   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:29.166162   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:29 GMT
	I1226 23:23:29.166235   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:29.166235   14940 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:23:29.661579   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:29.661579   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:29.661579   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:29.661579   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:29.664982   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:29.665941   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:29.665941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:29.665941   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:29 GMT
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Audit-Id: 930bf22f-5776-43cf-ae07-87ce837252e6
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:29.665941   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:29.666092   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:30.162411   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:30.162541   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:30.162541   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:30.162541   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:30.166914   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:30.166914   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:30.166914   14940 round_trippers.go:580]     Audit-Id: ac7524ba-66fb-428c-bac0-7f399fb0bd82
	I1226 23:23:30.166914   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:30.166914   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:30.167702   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:30.167702   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:30.167702   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:30 GMT
	I1226 23:23:30.167895   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:30.664551   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:30.664634   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:30.664743   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:30.664743   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:30.668620   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:30.668932   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:30 GMT
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Audit-Id: 5e86745f-d4b4-42f8-800a-571d6080df73
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:30.668932   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:30.668932   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:30.668932   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:30.669218   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:31.156986   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:31.157044   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:31.157044   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:31.157044   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:31.160638   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:31.160638   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:31.161244   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:31.161244   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:31 GMT
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Audit-Id: 9a837fa0-f630-4b6a-b359-37cd3b16a4ac
	I1226 23:23:31.161244   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:31.161520   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:31.660028   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:31.660103   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:31.660103   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:31.660103   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:31.665855   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:31.665977   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:31.665977   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:31 GMT
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Audit-Id: 1a03cd83-1d04-4ec4-823b-a93045150fd7
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:31.666071   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:31.666131   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:31.666178   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:31.667062   14940 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:23:32.164347   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:32.164347   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:32.164347   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:32.164347   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:32.168871   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:32.168871   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:32.168871   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:32.168871   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:32.169147   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:32.169147   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:32 GMT
	I1226 23:23:32.169147   14940 round_trippers.go:580]     Audit-Id: eb514e48-2845-4f1a-b887-400efdb9e1de
	I1226 23:23:32.169147   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:32.169417   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:32.667712   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:32.667712   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:32.667712   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:32.667712   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:32.672308   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:32.672308   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:32.673084   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:32.673195   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:32.673271   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:32 GMT
	I1226 23:23:32.673414   14940 round_trippers.go:580]     Audit-Id: 20f9fbbb-61ca-453e-bbe1-7471493a3232
	I1226 23:23:32.673414   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:32.673414   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:32.673414   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:33.168407   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:33.168407   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:33.168513   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:33.168513   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:33.172832   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:33.172832   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:33.172832   14940 round_trippers.go:580]     Audit-Id: 1b94f620-7106-4a74-82aa-2ee0481c416a
	I1226 23:23:33.172832   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:33.172832   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:33.172832   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:33.173772   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:33.173772   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:33 GMT
	I1226 23:23:33.173895   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:33.668483   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:33.668483   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:33.668483   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:33.668483   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:33.672100   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:33.672100   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:33.672100   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:33.673061   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:33.673061   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:33.673061   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:33 GMT
	I1226 23:23:33.673061   14940 round_trippers.go:580]     Audit-Id: e2390716-05e8-4266-acec-533770fce369
	I1226 23:23:33.673061   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:33.673135   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:33.673770   14940 node_ready.go:58] node "multinode-455300-m02" has status "Ready":"False"
	I1226 23:23:34.167453   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:34.167453   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:34.167534   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:34.167534   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:34.173793   14940 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 23:23:34.173793   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:34 GMT
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Audit-Id: 4f324aee-825b-4cb6-bc59-cc4d9df052c8
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:34.173793   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:34.173793   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:34.173793   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:34.173793   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:34.667996   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:34.667996   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:34.667996   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:34.667996   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:34.673019   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:34.673019   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:34 GMT
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Audit-Id: ca8ae0ed-e4c6-4b9c-a5bf-a90764826254
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:34.673019   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:34.673173   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:34.673173   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:34.673437   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2026","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3668 chars]
	I1226 23:23:35.157498   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:35.157624   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.157624   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.157624   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.162032   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:35.162098   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.162098   14940 round_trippers.go:580]     Audit-Id: a10b0910-0d46-4891-81d0-b0169f1b015a
	I1226 23:23:35.162098   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.162098   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.162098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.162098   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.162173   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.162396   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2045","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I1226 23:23:35.163054   14940 node_ready.go:49] node "multinode-455300-m02" has status "Ready":"True"
	I1226 23:23:35.163127   14940 node_ready.go:38] duration metric: took 8.0092841s waiting for node "multinode-455300-m02" to be "Ready" ...
	I1226 23:23:35.163127   14940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:23:35.163238   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods
	I1226 23:23:35.163407   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.163407   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.163407   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.169027   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:35.169799   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Audit-Id: 895f048d-d416-4dcc-bb96-e88162832909
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.171354   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.171354   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.171354   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.173318   14940 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2047"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83385 chars]
	I1226 23:23:35.177313   14940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.178195   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fj9bd
	I1226 23:23:35.178195   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.178195   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.178195   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.183300   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:35.184176   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.184254   14940 round_trippers.go:580]     Audit-Id: 188c90c0-665f-4196-9be5-e35d15a33c2d
	I1226 23:23:35.184254   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.184254   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.184285   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.184285   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.184285   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.184495   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fj9bd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc5229e-2af2-4e17-b23c-ebf836a42aa2","resourceVersion":"1863","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"91ebbb0b-42ef-4e50-952a-89feccfc96bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ebbb0b-42ef-4e50-952a-89feccfc96bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1226 23:23:35.184714   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.184714   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.184714   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.184714   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.189327   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:35.189327   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.189327   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.189327   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.189327   14940 round_trippers.go:580]     Audit-Id: c517bdfc-5a8e-4f03-996f-5284055b2f3f
	I1226 23:23:35.189327   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.190202   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.190202   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.190662   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.191108   14940 pod_ready.go:92] pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.191108   14940 pod_ready.go:81] duration metric: took 13.7957ms waiting for pod "coredns-5dd5756b68-fj9bd" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.191108   14940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.191108   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-455300
	I1226 23:23:35.191108   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.191108   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.191108   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.194708   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.194708   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.194708   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.194920   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.194920   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.194920   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.194920   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.194920   14940 round_trippers.go:580]     Audit-Id: 16f13b84-9f66-4136-b3d9-7377653aeeff
	I1226 23:23:35.195150   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-455300","namespace":"kube-system","uid":"cfd4d580-b2a8-4ff1-8d3c-c6b50f9bf86e","resourceVersion":"1834","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.21.182.57:2379","kubernetes.io/config.hash":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.mirror":"d4e365efddd1bd5d58716ef3ab8705bc","kubernetes.io/config.seen":"2023-12-26T23:20:52.614240428Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I1226 23:23:35.195222   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.195222   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.195222   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.195222   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.197833   14940 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 23:23:35.197833   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.197833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.197833   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.197833   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.198777   14940 round_trippers.go:580]     Audit-Id: 6633b134-105e-4a9a-9f93-724b2b514eb9
	I1226 23:23:35.198777   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.198777   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.199024   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.199024   14940 pod_ready.go:92] pod "etcd-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.199024   14940 pod_ready.go:81] duration metric: took 7.9152ms waiting for pod "etcd-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.199024   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.199602   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-455300
	I1226 23:23:35.199602   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.199602   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.199602   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.202610   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.202610   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.202610   14940 round_trippers.go:580]     Audit-Id: f9cbf07f-5190-4ed0-8d2c-9d73606970af
	I1226 23:23:35.202610   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.203675   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.203675   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.203675   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.203675   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.204614   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-455300","namespace":"kube-system","uid":"bbe5516b-f745-4a20-8df3-3cd3ac15d7f6","resourceVersion":"1836","creationTimestamp":"2023-12-26T23:21:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.21.182.57:8443","kubernetes.io/config.hash":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.mirror":"3c3021eb66c7ec14c35bcec06843a329","kubernetes.io/config.seen":"2023-12-26T23:20:52.614245928Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:21:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I1226 23:23:35.204614   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.204614   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.204614   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.204614   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.208605   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.208815   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.208815   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.208815   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.208815   14940 round_trippers.go:580]     Audit-Id: 0f0fc1a0-8d01-41d1-8334-265b1098df77
	I1226 23:23:35.208887   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.208887   14940 pod_ready.go:92] pod "kube-apiserver-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.208887   14940 pod_ready.go:81] duration metric: took 9.863ms waiting for pod "kube-apiserver-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.208887   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.209449   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-455300
	I1226 23:23:35.209449   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.209449   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.209449   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.212524   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.212892   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.212892   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.212892   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.212892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.212892   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.212892   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.212983   14940 round_trippers.go:580]     Audit-Id: 71485d2a-cb4b-4806-9cc6-2e72e1471ca9
	I1226 23:23:35.213300   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-455300","namespace":"kube-system","uid":"fdaf236b-e792-4278-908c-34b337b97beb","resourceVersion":"1844","creationTimestamp":"2023-12-26T22:58:13Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.mirror":"390e7b7338e932006306396347e13bcf","kubernetes.io/config.seen":"2023-12-26T22:58:06.456140564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1226 23:23:35.213805   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:35.213805   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.213805   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.213805   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.217436   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.217525   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.217525   14940 round_trippers.go:580]     Audit-Id: 459e9c88-a1b0-488b-a825-652a06a7c1ac
	I1226 23:23:35.217525   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.217525   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.217525   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.217583   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.217583   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.217771   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:35.218238   14940 pod_ready.go:92] pod "kube-controller-manager-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.218298   14940 pod_ready.go:81] duration metric: took 9.4115ms waiting for pod "kube-controller-manager-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.218353   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.359187   14940 request.go:629] Waited for 140.7457ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:23:35.359505   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pfcl
	I1226 23:23:35.359505   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.359505   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.359505   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.364854   14940 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1226 23:23:35.364937   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.364937   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Audit-Id: 0a39eb0c-2aca-4849-bc6e-ead8d68962f8
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.364937   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.364937   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.365477   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2pfcl","generateName":"kube-proxy-","namespace":"kube-system","uid":"61b5d2fb-802c-4b84-b7fa-7a7e9e024028","resourceVersion":"1897","creationTimestamp":"2023-12-26T23:06:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5972 chars]
	I1226 23:23:35.562262   14940 request.go:629] Waited for 195.6718ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:23:35.562581   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m03
	I1226 23:23:35.562581   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.562581   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.562581   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.565784   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.565784   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.565784   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.565784   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.565784   14940 round_trippers.go:580]     Audit-Id: a2168479-0369-4f76-ad17-da8dc1ea5a38
	I1226 23:23:35.566813   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m03","uid":"ef364efe-5dc7-4fb4-bc7c-76a3eaa41ba4","resourceVersion":"2020","creationTimestamp":"2023-12-26T23:16:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:16:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4392 chars]
	I1226 23:23:35.567349   14940 pod_ready.go:97] node "multinode-455300-m03" hosting pod "kube-proxy-2pfcl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300-m03" has status "Ready":"Unknown"
	I1226 23:23:35.567447   14940 pod_ready.go:81] duration metric: took 349.0941ms waiting for pod "kube-proxy-2pfcl" in "kube-system" namespace to be "Ready" ...
	E1226 23:23:35.567490   14940 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-455300-m03" hosting pod "kube-proxy-2pfcl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-455300-m03" has status "Ready":"Unknown"
	I1226 23:23:35.567490   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.766959   14940 request.go:629] Waited for 199.3508ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:23:35.767408   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqlf8
	I1226 23:23:35.767408   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.767408   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.767408   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.775150   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:23:35.775150   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.775150   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.775289   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Audit-Id: e5ea6ac2-3d81-4a66-a42c-c4775bf6e8ea
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.775289   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.775632   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqlf8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1caff24c-909f-42a9-a4b8-d9c8c1ec8828","resourceVersion":"2030","creationTimestamp":"2023-12-26T23:01:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:01:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I1226 23:23:35.967005   14940 request.go:629] Waited for 190.7404ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:35.967345   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300-m02
	I1226 23:23:35.967345   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:35.967420   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:35.967420   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:35.971089   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:35.971846   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:35.971846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:35 GMT
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Audit-Id: 1871250c-0b61-462a-8acf-97cd12a37cb0
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:35.971846   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:35.971846   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:35.972323   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300-m02","uid":"a4467645-9faa-4896-ae92-260c4b8343b5","resourceVersion":"2045","creationTimestamp":"2023-12-26T23:23:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T23_23_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T23:23:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3926 chars]
	I1226 23:23:35.972793   14940 pod_ready.go:92] pod "kube-proxy-bqlf8" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:35.972921   14940 pod_ready.go:81] duration metric: took 405.4316ms waiting for pod "kube-proxy-bqlf8" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:35.972921   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.162478   14940 request.go:629] Waited for 189.164ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:23:36.162729   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzcqb
	I1226 23:23:36.162729   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.162729   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.162729   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.169803   14940 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 23:23:36.169803   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.169803   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.169803   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Audit-Id: 46e4cf6b-a084-4670-aa27-ffe2fecaa858
	I1226 23:23:36.169803   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.169803   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzcqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"0027fd42-fa64-4d1d-acc8-36e7b41e4838","resourceVersion":"1829","creationTimestamp":"2023-12-26T22:58:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6bc57d5e-2df3-42da-9d90-0e388adf0201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bc57d5e-2df3-42da-9d90-0e388adf0201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I1226 23:23:36.370126   14940 request.go:629] Waited for 199.1839ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.370126   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.370126   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.370126   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.370126   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.374528   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:36.375521   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.375521   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.375521   14940 round_trippers.go:580]     Audit-Id: 413120ef-0209-47d1-aa3d-b0b82aa3ea57
	I1226 23:23:36.375604   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.375604   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.375660   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.375660   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.375660   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:36.376279   14940 pod_ready.go:92] pod "kube-proxy-hzcqb" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:36.376279   14940 pod_ready.go:81] duration metric: took 403.3584ms waiting for pod "kube-proxy-hzcqb" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.376279   14940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.571563   14940 request.go:629] Waited for 195.2841ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:23:36.571763   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-455300
	I1226 23:23:36.571861   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.571861   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.571861   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.584905   14940 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1226 23:23:36.584905   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Audit-Id: d81ba852-9041-4395-b9af-17dbf875cb21
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.584905   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.584905   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.584905   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.584905   14940 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-455300","namespace":"kube-system","uid":"58252b0c-41ef-43ab-b2e8-4bd2a1b21cb1","resourceVersion":"1839","creationTimestamp":"2023-12-26T22:58:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.mirror":"ebadb7d5fc522150603669ea98264147","kubernetes.io/config.seen":"2023-12-26T22:58:16.785831210Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:58:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1226 23:23:36.757415   14940 request.go:629] Waited for 171.2824ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.757475   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes/multinode-455300
	I1226 23:23:36.757475   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.757475   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.757475   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.761060   14940 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 23:23:36.761060   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Audit-Id: 55ad0ca4-774f-45a7-8226-5c97f23a3511
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.761060   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.761060   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.761060   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.762319   14940 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-26T22:58:12Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I1226 23:23:36.762808   14940 pod_ready.go:92] pod "kube-scheduler-multinode-455300" in "kube-system" namespace has status "Ready":"True"
	I1226 23:23:36.762905   14940 pod_ready.go:81] duration metric: took 386.6254ms waiting for pod "kube-scheduler-multinode-455300" in "kube-system" namespace to be "Ready" ...
	I1226 23:23:36.762905   14940 pod_ready.go:38] duration metric: took 1.5997784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 23:23:36.762973   14940 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 23:23:36.777080   14940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:23:36.800542   14940 system_svc.go:56] duration metric: took 37.5686ms WaitForService to wait for kubelet.
	I1226 23:23:36.800710   14940 kubeadm.go:581] duration metric: took 9.6912905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 23:23:36.800710   14940 node_conditions.go:102] verifying NodePressure condition ...
	I1226 23:23:36.963211   14940 request.go:629] Waited for 162.1156ms due to client-side throttling, not priority and fairness, request: GET:https://172.21.182.57:8443/api/v1/nodes
	I1226 23:23:36.963296   14940 round_trippers.go:463] GET https://172.21.182.57:8443/api/v1/nodes
	I1226 23:23:36.963296   14940 round_trippers.go:469] Request Headers:
	I1226 23:23:36.963296   14940 round_trippers.go:473]     Accept: application/json, */*
	I1226 23:23:36.963296   14940 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1226 23:23:36.967888   14940 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 23:23:36.968038   14940 round_trippers.go:577] Response Headers:
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Date: Tue, 26 Dec 2023 23:23:36 GMT
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Audit-Id: 71658490-be01-4f4e-b61e-d65443e2967b
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 23:23:36.968038   14940 round_trippers.go:580]     Content-Type: application/json
	I1226 23:23:36.968038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 40ab96db-c9ee-48cb-b53e-2db9dbde1313
	I1226 23:23:36.968038   14940 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df818f11-3261-4c5c-a4c3-4c48c6a7a15e
	I1226 23:23:36.968373   14940 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2048"},"items":[{"metadata":{"name":"multinode-455300","uid":"ef23e250-9c4d-41e3-b7d0-88acce6c0b8e","resourceVersion":"1849","creationTimestamp":"2023-12-26T22:58:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-455300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-455300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_58_18_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15593 chars]
	I1226 23:23:36.970078   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:23:36.970142   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:23:36.970142   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:23:36.970206   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:23:36.970206   14940 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1226 23:23:36.970206   14940 node_conditions.go:123] node cpu capacity is 2
	I1226 23:23:36.970206   14940 node_conditions.go:105] duration metric: took 169.4959ms to run NodePressure ...
	I1226 23:23:36.970206   14940 start.go:228] waiting for startup goroutines ...
	I1226 23:23:36.970283   14940 start.go:242] writing updated cluster config ...
	I1226 23:23:36.985993   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:23:36.985993   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:23:36.995074   14940 out.go:177] * Starting worker node multinode-455300-m03 in cluster multinode-455300
	I1226 23:23:36.997925   14940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 23:23:36.997925   14940 cache.go:56] Caching tarball of preloaded images
	I1226 23:23:36.997925   14940 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1226 23:23:36.998613   14940 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1226 23:23:36.998839   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:23:37.001559   14940 start.go:365] acquiring machines lock for multinode-455300-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:23:37.001675   14940 start.go:369] acquired machines lock for "multinode-455300-m03" in 115.3µs
	I1226 23:23:37.001806   14940 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:23:37.001933   14940 fix.go:54] fixHost starting: m03
	I1226 23:23:37.002483   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:39.160105   14940 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:23:39.160105   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:39.160105   14940 fix.go:102] recreateIfNeeded on multinode-455300-m03: state=Stopped err=<nil>
	W1226 23:23:39.160105   14940 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:23:39.163193   14940 out.go:177] * Restarting existing hyperv VM for "multinode-455300-m03" ...
	I1226 23:23:39.166460   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-455300-m03
	I1226 23:23:41.651376   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:41.651376   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:41.651468   14940 main.go:141] libmachine: Waiting for host to start...
	I1226 23:23:41.651468   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:43.947790   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:43.947951   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:43.947951   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:46.484329   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:46.484516   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:47.486564   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:49.705624   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:49.705882   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:49.705977   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:52.246896   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:52.246896   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:53.247632   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:23:55.477719   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:23:55.477857   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:55.478013   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:23:58.000021   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:23:58.000094   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:23:59.002274   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:01.246536   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:01.246915   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:01.246915   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:03.858764   14940 main.go:141] libmachine: [stdout =====>] : 
	I1226 23:24:03.858956   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:04.862506   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:07.118287   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:07.118522   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:07.118522   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:09.790601   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:09.790683   14940 main.go:141] libmachine: [stderr =====>] : 
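	The exchange above is minikube's "Waiting for host to start" loop: it re-runs the Hyper-V `ipaddresses[0]` query roughly every few seconds until the guest adapter reports an address. A minimal stand-alone sketch of that retry shape, with a hypothetical `query_ip` stub standing in for the real PowerShell call (the IP value is taken from the log; the retry count and sleep are illustrative):

	```shell
	# Stub standing in for:
	#   (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	# The adapter reports nothing for the first few polls, then an address.
	query_ip() {
	  if [ "$1" -ge 4 ]; then echo "172.21.187.8"; fi
	}

	attempt=0
	ip=""
	while [ -z "$ip" ] && [ "$attempt" -lt 10 ]; do
	  attempt=$((attempt + 1))
	  ip=$(query_ip "$attempt")
	  # The real loop waits about a second between polls; shortened here.
	  [ -z "$ip" ] && sleep 0.01
	done
	echo "got IP ${ip} after ${attempt} polls"
	```

	In the log this took four query rounds (23:23:46 through 23:24:09) before `172.21.187.8` came back.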
	I1226 23:24:09.794401   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:11.935592   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:11.935679   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:11.935734   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:14.568010   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:14.568010   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:14.568483   14940 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-455300\config.json ...
	I1226 23:24:14.571901   14940 machine.go:88] provisioning docker machine ...
	I1226 23:24:14.572003   14940 buildroot.go:166] provisioning hostname "multinode-455300-m03"
	I1226 23:24:14.572003   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:16.750847   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:16.751079   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:16.751079   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:19.365525   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:19.365525   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:19.372249   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:19.372983   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:19.372983   14940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-455300-m03 && echo "multinode-455300-m03" | sudo tee /etc/hostname
	I1226 23:24:19.535509   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-455300-m03
	
	I1226 23:24:19.535509   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:21.763432   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:21.763801   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:21.763941   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:24.393318   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:24.393318   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:24.398934   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:24.400213   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:24.400213   14940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-455300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-455300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-455300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 23:24:24.554385   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
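	The SSH command above is an idempotent `/etc/hosts` edit: skip if the hostname is already present, rewrite an existing `127.0.1.1` entry if there is one, otherwise append. A stand-alone demo of the same pattern against a temporary file rather than the real `/etc/hosts` (the hostname is the one from the log; the seeded file content is illustrative):

	```shell
	HOSTS_FILE=$(mktemp)
	NAME="multinode-455300-m03"
	printf '127.0.0.1 localhost\n127.0.1.1 stale-name\n' > "$HOSTS_FILE"

	# Same three-way logic as the logged command: already set / replace / append.
	if ! grep -q "[[:space:]]${NAME}\$" "$HOSTS_FILE"; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS_FILE"; then
	    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" "$HOSTS_FILE"
	  else
	    echo "127.0.1.1 ${NAME}" >> "$HOSTS_FILE"
	  fi
	fi
	grep '^127\.0\.1\.1' "$HOSTS_FILE"
	```

	Because of the outer guard, re-running the command on an already-provisioned node is a no-op, which is why the logged output is empty.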
	I1226 23:24:24.554385   14940 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:24:24.554385   14940 buildroot.go:174] setting up certificates
	I1226 23:24:24.554385   14940 provision.go:83] configureAuth start
	I1226 23:24:24.554385   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:26.764966   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:26.764966   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:26.765071   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:29.419669   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:29.420013   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:29.420013   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:31.634509   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:31.634781   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:31.634781   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:34.267201   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:34.267486   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:34.267486   14940 provision.go:138] copyHostCerts
	I1226 23:24:34.267809   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1226 23:24:34.268027   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:24:34.268027   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:24:34.268027   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:24:34.269964   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1226 23:24:34.270055   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:24:34.270055   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:24:34.270821   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:24:34.272153   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1226 23:24:34.272470   14940 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:24:34.272514   14940 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:24:34.272782   14940 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:24:34.273832   14940 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-455300-m03 san=[172.21.187.8 172.21.187.8 localhost 127.0.0.1 minikube multinode-455300-m03]
	I1226 23:24:34.425530   14940 provision.go:172] copyRemoteCerts
	I1226 23:24:34.440789   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:24:34.440789   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:36.585909   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:36.586160   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:36.586261   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:39.176017   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:39.176017   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:39.176239   14940 sshutil.go:53] new ssh client: &{IP:172.21.187.8 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m03\id_rsa Username:docker}
	I1226 23:24:39.285902   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8451139s)
	I1226 23:24:39.285997   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1226 23:24:39.286065   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 23:24:39.326967   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1226 23:24:39.327243   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:24:39.370919   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1226 23:24:39.370950   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 23:24:39.416979   14940 provision.go:86] duration metric: configureAuth took 14.8625977s
	I1226 23:24:39.416979   14940 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:24:39.417596   14940 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:24:39.417596   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:41.607443   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:41.607443   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:41.607745   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:44.219512   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:44.219701   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:44.225555   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:44.226273   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:44.226273   14940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:24:44.366690   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:24:44.366764   14940 buildroot.go:70] root file system type: tmpfs
	I1226 23:24:44.366963   14940 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:24:44.367053   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:46.563580   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:46.563788   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:46.563905   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:49.204936   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:49.204936   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:49.210613   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:49.212271   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:49.212271   14940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.21.182.57"
	Environment="NO_PROXY=172.21.182.57,172.21.184.151"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:24:49.375656   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.21.182.57
	Environment=NO_PROXY=172.21.182.57,172.21.184.151
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:24:49.375656   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:51.550122   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:51.550122   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:51.550254   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:24:54.146951   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:24:54.146951   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:54.153736   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:24:54.154348   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:24:54.154348   14940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:24:55.472481   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:24:55.472699   14940 machine.go:91] provisioned docker machine in 40.9008068s
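	The `diff -u ... || { mv ...; systemctl ... }` step above is a compare-then-swap: the candidate unit is written to `docker.service.new`, and it is only moved into place (followed by daemon-reload, enable, and restart) when it differs from the current file — here `diff` failed because no prior unit existed, so the swap ran. A hedged sketch of that pattern on temp files, touching no real systemd state:

	```shell
	CUR=$(mktemp); NEW="${CUR}.new"
	printf '[Service]\nExecStart=/usr/bin/dockerd --label provider=old\n' > "$CUR"
	printf '[Service]\nExecStart=/usr/bin/dockerd --label provider=hyperv\n' > "$NEW"

	# Swap in the new unit only when content differs (diff also "differs" when
	# the current file is missing, matching the can't-stat case in the log).
	if ! diff -u "$CUR" "$NEW" > /dev/null 2>&1; then
	  mv "$NEW" "$CUR"
	  updated=yes   # real flow: systemctl -f daemon-reload && systemctl -f restart docker
	else
	  updated=no
	fi
	echo "updated=${updated}"
	```

	When the generated unit is unchanged, the guard keeps minikube from restarting Docker needlessly on every reprovision.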
	I1226 23:24:55.472699   14940 start.go:300] post-start starting for "multinode-455300-m03" (driver="hyperv")
	I1226 23:24:55.472781   14940 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:24:55.486340   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:24:55.486340   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:24:57.618458   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:24:57.618652   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:24:57.618652   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:25:00.230146   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:25:00.230146   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:00.230489   14940 sshutil.go:53] new ssh client: &{IP:172.21.187.8 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m03\id_rsa Username:docker}
	I1226 23:25:00.344746   14940 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8584066s)
	I1226 23:25:00.357947   14940 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:25:00.364041   14940 command_runner.go:130] > NAME=Buildroot
	I1226 23:25:00.364041   14940 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I1226 23:25:00.364041   14940 command_runner.go:130] > ID=buildroot
	I1226 23:25:00.364041   14940 command_runner.go:130] > VERSION_ID=2021.02.12
	I1226 23:25:00.364041   14940 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1226 23:25:00.364041   14940 info.go:137] Remote host: Buildroot 2021.02.12
	I1226 23:25:00.364041   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:25:00.365753   14940 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:25:00.366888   14940 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:25:00.366888   14940 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> /etc/ssl/certs/107282.pem
	I1226 23:25:00.380257   14940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:25:00.398587   14940 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:25:00.440460   14940 start.go:303] post-start completed in 4.9676796s
	I1226 23:25:00.440460   14940 fix.go:56] fixHost completed within 1m23.4385434s
	I1226 23:25:00.440460   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:25:02.664761   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:25:02.664761   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:02.664761   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:25:05.296012   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:25:05.296205   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:05.302146   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:25:05.302911   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:25:05.302911   14940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1226 23:25:05.443269   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703633105.440781281
	
	I1226 23:25:05.443269   14940 fix.go:206] guest clock: 1703633105.440781281
	I1226 23:25:05.443269   14940 fix.go:219] Guest: 2023-12-26 23:25:05.440781281 +0000 UTC Remote: 2023-12-26 23:25:00.4404603 +0000 UTC m=+367.264342901 (delta=5.000320981s)
	I1226 23:25:05.443269   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:25:07.653272   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:25:07.653345   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:07.653345   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	I1226 23:25:10.305307   14940 main.go:141] libmachine: [stdout =====>] : 172.21.187.8
	
	I1226 23:25:10.305307   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:10.311381   14940 main.go:141] libmachine: Using SSH client type: native
	I1226 23:25:10.311573   14940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.187.8 22 <nil> <nil>}
	I1226 23:25:10.312131   14940 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703633105
	I1226 23:25:10.463031   14940 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:25:05 UTC 2023
	
	I1226 23:25:10.463031   14940 fix.go:226] clock set: Tue Dec 26 23:25:05 UTC 2023
	 (err=<nil>)
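	The clock fix above compares the guest's epoch time against the host-side timestamp and resyncs with `date -s @<epoch>` when they drift. The arithmetic, using the values from the log (the host epoch below is derived from the logged ~5.000320981s delta, so it is approximate):

	```shell
	guest_epoch=1703633105          # from the guest's `date +%s...` output above
	host_epoch=1703633100           # approx.: guest minus the logged 5s delta
	delta=$((guest_epoch - host_epoch))
	echo "delta=${delta}s"
	# Illustrative threshold only; the log shows minikube resyncing at this drift.
	if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
	  echo "would run: sudo date -s @${guest_epoch}"
	fi
	```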
	I1226 23:25:10.463031   14940 start.go:83] releasing machines lock for "multinode-455300-m03", held for 1m33.4612432s
	I1226 23:25:10.463031   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:25:12.692253   14940 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:25:12.692253   14940 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:25:12.692357   14940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m03 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	-- Journal begins at Tue 2023-12-26 23:19:21 UTC, ends at Tue 2023-12-26 23:25:36 UTC. --
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.971477431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.971497631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.971508431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.983395032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.983455131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.983472531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:21:16 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:16.983483331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:17 multinode-455300 cri-dockerd[1262]: time="2023-12-26T23:21:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/597df0e084d00a4a1514de455d5964afe2fb76bf2b0c0e1b268d18dc56917b75/resolv.conf as [nameserver 172.21.176.1]"
	Dec 26 23:21:17 multinode-455300 cri-dockerd[1262]: time="2023-12-26T23:21:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f39c07ef44374ee7585169e028e45172f3a9cd26c12a572d70a11031b48a5c05/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 26 23:21:17 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:17.934944408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 23:21:17 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:17.935083406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:17 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:17.935662497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:21:17 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:17.935802695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:18 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:18.161360419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 23:21:18 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:18.161601415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:18 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:18.161796312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:21:18 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:18.161986809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:33 multinode-455300 dockerd[1041]: time="2023-12-26T23:21:33.499418269Z" level=info msg="ignoring event" container=8649a5e0dffb84da6aa50e3b5b5290c0b974c6383c5e7105e630994dc7813a7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:21:33 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:33.500747762Z" level=info msg="shim disconnected" id=8649a5e0dffb84da6aa50e3b5b5290c0b974c6383c5e7105e630994dc7813a7e namespace=moby
	Dec 26 23:21:33 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:33.500831961Z" level=warning msg="cleaning up after shim disconnected" id=8649a5e0dffb84da6aa50e3b5b5290c0b974c6383c5e7105e630994dc7813a7e namespace=moby
	Dec 26 23:21:33 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:33.500879061Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 26 23:21:46 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:46.906198489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 26 23:21:46 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:46.906939188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 26 23:21:46 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:46.906975388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 26 23:21:46 multinode-455300 dockerd[1047]: time="2023-12-26T23:21:46.907975485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf7c027a48c6e       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   7166c0155a91c       storage-provisioner
	6842a79b256e2       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   f39c07ef44374       busybox-5bc68d56bd-flvvn
	27e300ad48bf6       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   597df0e084d00       coredns-5dd5756b68-fj9bd
	ec9e30846687d       c7d1297425461                                                                                         4 minutes ago       Running             kindnet-cni               1                   8d35261daba4b       kindnet-zxd45
	8649a5e0dffb8       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   7166c0155a91c       storage-provisioner
	0d485f58415f2       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                1                   2c4fa8e7407b6       kube-proxy-hzcqb
	f7361c6f4bd18       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            1                   c71f803041feb       kube-scheduler-multinode-455300
	49e8b49da16e8       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   1                   4bb4fb569699c       kube-controller-manager-multinode-455300
	e7cfb9be043a7       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   81d7001391376       etcd-multinode-455300
	fa1d9a10b2234       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   c4292e51a1ee7       kube-apiserver-multinode-455300
	26363c81c8c2e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   781e00b7789fd       busybox-5bc68d56bd-flvvn
	5944000e150d4       ead0a4a53df89                                                                                         26 minutes ago      Exited              coredns                   0                   58a2f8149f7fd       coredns-5dd5756b68-fj9bd
	5e6fbedb8b41b       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              26 minutes ago      Exited              kindnet-cni               0                   6374d63f48806       kindnet-zxd45
	de1e7a6bed714       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   e74bc4380f45a       kube-proxy-hzcqb
	239b6c40fa398       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   dd32942a97204       kube-scheduler-multinode-455300
	9a1fd87d0726d       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   2303b2b6305d3       kube-controller-manager-multinode-455300
	
	
	==> coredns [27e300ad48bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 727859d02e49f226305255353d6ea73d4e25f577656e92efc00f8bdfe7b9e0a41c48e607fb0e54b875432612a89a9ff227ec88b4a4c86d52ce98698e96c5359a
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45211 - 59043 "HINFO IN 6500578579590031979.5100900820948970122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042388278s
	
	
	==> coredns [5944000e150d] <==
	[INFO] 10.244.0.3:46329 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000275807s
	[INFO] 10.244.0.3:51952 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257006s
	[INFO] 10.244.0.3:39632 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116603s
	[INFO] 10.244.0.3:39823 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069001s
	[INFO] 10.244.0.3:40379 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093902s
	[INFO] 10.244.0.3:36378 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086502s
	[INFO] 10.244.0.3:37142 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179905s
	[INFO] 10.244.1.2:38866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125403s
	[INFO] 10.244.1.2:55914 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059302s
	[INFO] 10.244.1.2:34419 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086902s
	[INFO] 10.244.1.2:44856 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047402s
	[INFO] 10.244.0.3:33876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000471212s
	[INFO] 10.244.0.3:46526 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078902s
	[INFO] 10.244.0.3:55356 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178604s
	[INFO] 10.244.0.3:54826 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001129029s
	[INFO] 10.244.1.2:53436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271206s
	[INFO] 10.244.1.2:44799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000369109s
	[INFO] 10.244.1.2:35728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111303s
	[INFO] 10.244.1.2:56657 - 5 "PTR IN 1.176.21.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150804s
	[INFO] 10.244.0.3:58149 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261405s
	[INFO] 10.244.0.3:52594 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000382108s
	[INFO] 10.244.0.3:44384 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090701s
	[INFO] 10.244.0.3:46996 - 5 "PTR IN 1.176.21.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085502s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-455300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-455300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-455300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T22_58_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:58:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-455300
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 23:25:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 23:21:12 +0000   Tue, 26 Dec 2023 22:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 23:21:12 +0000   Tue, 26 Dec 2023 22:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 23:21:12 +0000   Tue, 26 Dec 2023 22:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 23:21:12 +0000   Tue, 26 Dec 2023 23:21:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.182.57
	  Hostname:    multinode-455300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0e32831334048b78fb130f9239e270d
	  System UUID:                cabade69-24af-5b4b-90ee-9a5f4e38ee27
	  Boot ID:                    421ef816-f6d4-40e7-b16e-c4a43d6b76ec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-flvvn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-fj9bd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-455300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m36s
	  kube-system                 kindnet-zxd45                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-455300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-multinode-455300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-hzcqb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-455300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-455300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-455300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-455300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-455300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-455300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-455300 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-455300 event: Registered Node multinode-455300 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-455300 status is now: NodeReady
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m44s)  kubelet          Node multinode-455300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m44s)  kubelet          Node multinode-455300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m44s)  kubelet          Node multinode-455300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m24s                  node-controller  Node multinode-455300 event: Registered Node multinode-455300 in Controller
	
	
	Name:               multinode-455300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-455300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-455300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_26T23_23_26_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 23:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-455300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 23:25:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 23:23:34 +0000   Tue, 26 Dec 2023 23:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 23:23:34 +0000   Tue, 26 Dec 2023 23:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 23:23:34 +0000   Tue, 26 Dec 2023 23:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 23:23:34 +0000   Tue, 26 Dec 2023 23:23:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.184.151
	  Hostname:    multinode-455300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0e46123c7d243bebc361d9626123160
	  System UUID:                995771b5-3446-ed4e-9347-b1c6a8c42028
	  Boot ID:                    0636e228-1842-44c3-8342-8050af46c639
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xz7zz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kindnet-zt55b               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-bqlf8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 23m                    kube-proxy  
	  Normal  Starting                 2m9s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)      kubelet     Node multinode-455300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)      kubelet     Node multinode-455300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)      kubelet     Node multinode-455300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                    kubelet     Node multinode-455300-m02 status is now: NodeReady
	  Normal  Starting                 2m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x2 over 2m12s)  kubelet     Node multinode-455300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x2 over 2m12s)  kubelet     Node multinode-455300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x2 over 2m12s)  kubelet     Node multinode-455300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m2s                   kubelet     Node multinode-455300-m02 status is now: NodeReady
	
	
	Name:               multinode-455300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-455300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-455300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_26T23_23_26_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 23:16:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-455300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 23:17:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 26 Dec 2023 23:16:51 +0000   Tue, 26 Dec 2023 23:21:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 26 Dec 2023 23:16:51 +0000   Tue, 26 Dec 2023 23:21:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 26 Dec 2023 23:16:51 +0000   Tue, 26 Dec 2023 23:21:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 26 Dec 2023 23:16:51 +0000   Tue, 26 Dec 2023 23:21:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.21.188.21
	  Hostname:    multinode-455300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4676fb334975445691cd84a53d0b18d8
	  System UUID:                7328a204-99ed-db46-9697-da6ffa285a6e
	  Boot ID:                    619d1b92-264c-48ba-8a4d-0037b27a4a21
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jsvj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-2pfcl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m55s                  kube-proxy       
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-455300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-455300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-455300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                    kubelet          Node multinode-455300-m03 status is now: NodeReady
	  Normal  Starting                 8m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m58s (x2 over 8m58s)  kubelet          Node multinode-455300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x2 over 8m58s)  kubelet          Node multinode-455300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x2 over 8m58s)  kubelet          Node multinode-455300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m45s                  kubelet          Node multinode-455300-m03 status is now: NodeReady
	  Normal  RegisteredNode           4m24s                  node-controller  Node multinode-455300-m03 event: Registered Node multinode-455300-m03 in Controller
	  Normal  NodeNotReady             3m44s                  node-controller  Node multinode-455300-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	              on the kernel command line
	[  +0.000345] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.354755] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.128501] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.212573] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.035585] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec26 23:20] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.166803] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[ +26.466225] systemd-fstab-generator[966]: Ignoring "noauto" for root device
	[  +0.624405] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.179568] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[  +0.209548] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +1.513818] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.411138] systemd-fstab-generator[1207]: Ignoring "noauto" for root device
	[  +0.188043] systemd-fstab-generator[1218]: Ignoring "noauto" for root device
	[  +0.176942] systemd-fstab-generator[1229]: Ignoring "noauto" for root device
	[  +0.183564] systemd-fstab-generator[1240]: Ignoring "noauto" for root device
	[  +0.226357] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[  +4.429702] systemd-fstab-generator[1474]: Ignoring "noauto" for root device
	[  +0.902974] kauditd_printk_skb: 29 callbacks suppressed
	[Dec26 23:21] kauditd_printk_skb: 18 callbacks suppressed
	[Dec26 23:25] hrtimer: interrupt took 1279809 ns
	
	
	==> etcd [e7cfb9be043a] <==
	{"level":"info","ts":"2023-12-26T23:20:55.779466Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"af5a9070d9ef2513","local-member-id":"897479b6d7267bc0","added-peer-id":"897479b6d7267bc0","added-peer-peer-urls":["https://172.21.184.4:2380"]}
	{"level":"info","ts":"2023-12-26T23:20:55.779646Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"af5a9070d9ef2513","local-member-id":"897479b6d7267bc0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T23:20:55.78094Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T23:20:55.78237Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-26T23:20:55.785505Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"897479b6d7267bc0","initial-advertise-peer-urls":["https://172.21.182.57:2380"],"listen-peer-urls":["https://172.21.182.57:2380"],"advertise-client-urls":["https://172.21.182.57:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.21.182.57:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-26T23:20:55.785686Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-26T23:20:55.783339Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.21.182.57:2380"}
	{"level":"info","ts":"2023-12-26T23:20:55.786427Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.21.182.57:2380"}
	{"level":"info","ts":"2023-12-26T23:20:55.780393Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-26T23:20:55.786508Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-26T23:20:55.78654Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-26T23:20:57.623159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"897479b6d7267bc0 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-26T23:20:57.623255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"897479b6d7267bc0 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-26T23:20:57.623355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"897479b6d7267bc0 received MsgPreVoteResp from 897479b6d7267bc0 at term 2"}
	{"level":"info","ts":"2023-12-26T23:20:57.623609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"897479b6d7267bc0 became candidate at term 3"}
	{"level":"info","ts":"2023-12-26T23:20:57.623816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"897479b6d7267bc0 received MsgVoteResp from 897479b6d7267bc0 at term 3"}
	{"level":"info","ts":"2023-12-26T23:20:57.624012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"897479b6d7267bc0 became leader at term 3"}
	{"level":"info","ts":"2023-12-26T23:20:57.624307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 897479b6d7267bc0 elected leader 897479b6d7267bc0 at term 3"}
	{"level":"info","ts":"2023-12-26T23:20:57.631313Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"897479b6d7267bc0","local-member-attributes":"{Name:multinode-455300 ClientURLs:[https://172.21.182.57:2379]}","request-path":"/0/members/897479b6d7267bc0/attributes","cluster-id":"af5a9070d9ef2513","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-26T23:20:57.631744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T23:20:57.63226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T23:20:57.633899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-26T23:20:57.636417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.21.182.57:2379"}
	{"level":"info","ts":"2023-12-26T23:20:57.633941Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-26T23:20:57.649085Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:25:37 up 6 min,  0 users,  load average: 0.25, 0.38, 0.19
	Linux multinode-455300 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [5e6fbedb8b41] <==
	I1226 23:16:53.882032       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:17:03.890280       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:17:03.890442       1 main.go:227] handling current node
	I1226 23:17:03.890459       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:17:03.890488       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:17:03.890880       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:17:03.890897       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:17:13.906873       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:17:13.906935       1 main.go:227] handling current node
	I1226 23:17:13.906948       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:17:13.906955       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:17:13.907201       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:17:13.907236       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:17:23.915148       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:17:23.915195       1 main.go:227] handling current node
	I1226 23:17:23.915209       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:17:23.915219       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:17:23.915462       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:17:23.915495       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:17:33.933358       1 main.go:223] Handling node with IPs: map[172.21.184.4:{}]
	I1226 23:17:33.933471       1 main.go:227] handling current node
	I1226 23:17:33.933754       1 main.go:223] Handling node with IPs: map[172.21.187.58:{}]
	I1226 23:17:33.933891       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:17:33.934088       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:17:33.934183       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ec9e30846687] <==
	I1226 23:24:48.456786       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:24:58.464808       1 main.go:223] Handling node with IPs: map[172.21.182.57:{}]
	I1226 23:24:58.465166       1 main.go:227] handling current node
	I1226 23:24:58.465183       1 main.go:223] Handling node with IPs: map[172.21.184.151:{}]
	I1226 23:24:58.465194       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:24:58.465637       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:24:58.465758       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:25:08.482545       1 main.go:223] Handling node with IPs: map[172.21.182.57:{}]
	I1226 23:25:08.482932       1 main.go:227] handling current node
	I1226 23:25:08.483186       1 main.go:223] Handling node with IPs: map[172.21.184.151:{}]
	I1226 23:25:08.483371       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:25:08.483630       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:25:08.483666       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:25:18.504068       1 main.go:223] Handling node with IPs: map[172.21.182.57:{}]
	I1226 23:25:18.504116       1 main.go:227] handling current node
	I1226 23:25:18.504132       1 main.go:223] Handling node with IPs: map[172.21.184.151:{}]
	I1226 23:25:18.504139       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:25:18.504728       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:25:18.504866       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	I1226 23:25:28.520637       1 main.go:223] Handling node with IPs: map[172.21.182.57:{}]
	I1226 23:25:28.520752       1 main.go:227] handling current node
	I1226 23:25:28.520767       1 main.go:223] Handling node with IPs: map[172.21.184.151:{}]
	I1226 23:25:28.520775       1 main.go:250] Node multinode-455300-m02 has CIDR [10.244.1.0/24] 
	I1226 23:25:28.521815       1 main.go:223] Handling node with IPs: map[172.21.188.21:{}]
	I1226 23:25:28.521914       1 main.go:250] Node multinode-455300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fa1d9a10b223] <==
	I1226 23:20:59.652104       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1226 23:20:59.653027       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1226 23:20:59.653129       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1226 23:20:59.785242       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 23:20:59.801833       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1226 23:20:59.803592       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1226 23:20:59.803632       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1226 23:20:59.803647       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1226 23:20:59.803670       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1226 23:20:59.812520       1 shared_informer.go:318] Caches are synced for configmaps
	I1226 23:20:59.852432       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1226 23:20:59.852668       1 aggregator.go:166] initial CRD sync complete...
	I1226 23:20:59.852720       1 autoregister_controller.go:141] Starting autoregister controller
	I1226 23:20:59.852739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1226 23:20:59.852759       1 cache.go:39] Caches are synced for autoregister controller
	I1226 23:20:59.858355       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1226 23:21:00.611164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1226 23:21:01.433226       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.21.182.57]
	I1226 23:21:01.435000       1 controller.go:624] quota admission added evaluator for: endpoints
	I1226 23:21:01.450454       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 23:21:03.850156       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1226 23:21:04.080795       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1226 23:21:04.095595       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1226 23:21:04.244139       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 23:21:04.264700       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [49e8b49da16e] <==
	I1226 23:21:52.987207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.000033ms"
	I1226 23:21:52.989930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.5µs"
	I1226 23:21:52.990884       1 event.go:307] "Event occurred" object="kube-system/kindnet-zt55b" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1226 23:21:53.012717       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-2pfcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1226 23:21:53.021212       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-bqlf8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1226 23:23:21.026164       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-xz7zz"
	I1226 23:23:21.046566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.998259ms"
	I1226 23:23:21.067113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.429881ms"
	I1226 23:23:21.088077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.897681ms"
	I1226 23:23:21.088465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.5µs"
	I1226 23:23:24.599048       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-bskhd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-bskhd"
	I1226 23:23:24.599363       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-455300-m02\" does not exist"
	I1226 23:23:24.631749       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-455300-m02" podCIDRs=["10.244.1.0/24"]
	I1226 23:23:25.456642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="91.2µs"
	I1226 23:23:34.781564       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:23:34.817393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.9µs"
	I1226 23:23:37.581079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="191.999µs"
	I1226 23:23:37.593231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="106.1µs"
	I1226 23:23:37.623179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="169µs"
	I1226 23:23:38.001116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.7µs"
	I1226 23:23:38.020473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="81.2µs"
	I1226 23:23:38.021081       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-bskhd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-bskhd"
	I1226 23:23:38.072066       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-xz7zz" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-xz7zz"
	I1226 23:23:40.114388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.403393ms"
	I1226 23:23:40.115208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="146.2µs"
	
	
	==> kube-controller-manager [9a1fd87d0726] <==
	I1226 23:02:21.091416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.47224ms"
	I1226 23:02:21.091483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.901µs"
	I1226 23:02:21.111188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="280.208µs"
	I1226 23:02:21.128941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.502µs"
	I1226 23:02:24.191648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.250081ms"
	I1226 23:02:24.192309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.801µs"
	I1226 23:02:24.485337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.424118ms"
	I1226 23:02:24.486046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="112.004µs"
	I1226 23:06:10.505462       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:06:10.507407       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-455300-m03\" does not exist"
	I1226 23:06:10.536493       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-455300-m03" podCIDRs=["10.244.2.0/24"]
	I1226 23:06:10.546727       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8jsvj"
	I1226 23:06:10.548400       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2pfcl"
	I1226 23:06:13.799351       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-455300-m03"
	I1226 23:06:13.799366       1 event.go:307] "Event occurred" object="multinode-455300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-455300-m03 event: Registered Node multinode-455300-m03 in Controller"
	I1226 23:06:31.678666       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:14:13.945774       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:14:13.947640       1 event.go:307] "Event occurred" object="multinode-455300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-455300-m03 status is now: NodeNotReady"
	I1226 23:14:13.967999       1 event.go:307] "Event occurred" object="kube-system/kindnet-8jsvj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1226 23:14:13.997412       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-2pfcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1226 23:16:37.504888       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:16:38.840769       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	I1226 23:16:38.843099       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-455300-m03\" does not exist"
	I1226 23:16:38.861019       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-455300-m03" podCIDRs=["10.244.3.0/24"]
	I1226 23:16:51.283770       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-455300-m02"
	
	
	==> kube-proxy [0d485f58415f] <==
	I1226 23:21:03.478884       1 server_others.go:69] "Using iptables proxy"
	I1226 23:21:03.550735       1 node.go:141] Successfully retrieved node IP: 172.21.182.57
	I1226 23:21:03.717358       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1226 23:21:03.717410       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1226 23:21:03.733160       1 server_others.go:152] "Using iptables Proxier"
	I1226 23:21:03.733623       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 23:21:03.734777       1 server.go:846] "Version info" version="v1.28.4"
	I1226 23:21:03.734996       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 23:21:03.737954       1 config.go:188] "Starting service config controller"
	I1226 23:21:03.738601       1 config.go:97] "Starting endpoint slice config controller"
	I1226 23:21:03.740299       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 23:21:03.740388       1 config.go:315] "Starting node config controller"
	I1226 23:21:03.740401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 23:21:03.740612       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 23:21:03.840442       1 shared_informer.go:318] Caches are synced for service config
	I1226 23:21:03.840533       1 shared_informer.go:318] Caches are synced for node config
	I1226 23:21:03.841637       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de1e7a6bed71] <==
	I1226 22:58:30.910121       1 server_others.go:69] "Using iptables proxy"
	I1226 22:58:30.925166       1 node.go:141] Successfully retrieved node IP: 172.21.184.4
	I1226 22:58:30.980870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1226 22:58:30.981024       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1226 22:58:30.985760       1 server_others.go:152] "Using iptables Proxier"
	I1226 22:58:30.986256       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 22:58:30.987130       1 server.go:846] "Version info" version="v1.28.4"
	I1226 22:58:30.987357       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:58:30.989326       1 config.go:188] "Starting service config controller"
	I1226 22:58:30.989433       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 22:58:30.989835       1 config.go:97] "Starting endpoint slice config controller"
	I1226 22:58:30.989865       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 22:58:30.993636       1 config.go:315] "Starting node config controller"
	I1226 22:58:30.993653       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 22:58:31.090153       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1226 22:58:31.090220       1 shared_informer.go:318] Caches are synced for service config
	I1226 22:58:31.094863       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [239b6c40fa39] <==
	E1226 22:58:13.621495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1226 22:58:13.720559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 22:58:13.720681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 22:58:13.761277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1226 22:58:13.761414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1226 22:58:13.814126       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 22:58:13.814406       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1226 22:58:13.815013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.815313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:13.913876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.913913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:13.947103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:58:13.947256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1226 22:58:13.973770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:58:13.973856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1226 22:58:13.988228       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 22:58:13.988370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 22:58:14.058498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:58:14.058642       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 22:58:14.126846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 22:58:14.126942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1226 22:58:16.909733       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 23:17:43.557957       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1226 23:17:43.558072       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1226 23:17:43.558269       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f7361c6f4bd1] <==
	I1226 23:20:56.823615       1 serving.go:348] Generated self-signed cert in-memory
	W1226 23:20:59.696812       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1226 23:20:59.697256       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1226 23:20:59.697418       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1226 23:20:59.697682       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1226 23:20:59.808265       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1226 23:20:59.808506       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 23:20:59.813200       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1226 23:20:59.814061       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1226 23:20:59.814403       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1226 23:20:59.816441       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 23:20:59.917332       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-26 23:19:21 UTC, ends at Tue 2023-12-26 23:25:37 UTC. --
	Dec 26 23:21:12 multinode-455300 kubelet[1480]: I1226 23:21:12.658075    1480 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 26 23:21:17 multinode-455300 kubelet[1480]: I1226 23:21:17.652948    1480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597df0e084d00a4a1514de455d5964afe2fb76bf2b0c0e1b268d18dc56917b75"
	Dec 26 23:21:17 multinode-455300 kubelet[1480]: I1226 23:21:17.831924    1480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39c07ef44374ee7585169e028e45172f3a9cd26c12a572d70a11031b48a5c05"
	Dec 26 23:21:34 multinode-455300 kubelet[1480]: I1226 23:21:34.191640    1480 scope.go:117] "RemoveContainer" containerID="c49ce5a6098832229de0ff1f891d4a31529213fbd82c5939c2f9dbf50be4b97d"
	Dec 26 23:21:34 multinode-455300 kubelet[1480]: I1226 23:21:34.192299    1480 scope.go:117] "RemoveContainer" containerID="8649a5e0dffb84da6aa50e3b5b5290c0b974c6383c5e7105e630994dc7813a7e"
	Dec 26 23:21:34 multinode-455300 kubelet[1480]: E1226 23:21:34.192522    1480 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e274f19d-1940-400d-b887-aaf390e64fdd)\"" pod="kube-system/storage-provisioner" podUID="e274f19d-1940-400d-b887-aaf390e64fdd"
	Dec 26 23:21:46 multinode-455300 kubelet[1480]: I1226 23:21:46.693317    1480 scope.go:117] "RemoveContainer" containerID="8649a5e0dffb84da6aa50e3b5b5290c0b974c6383c5e7105e630994dc7813a7e"
	Dec 26 23:21:52 multinode-455300 kubelet[1480]: I1226 23:21:52.719831    1480 scope.go:117] "RemoveContainer" containerID="0d2ca397ea4bdb1ddc7047352e9fd7fa1bc5a85c9a41ee6070f71efa834fe3bc"
	Dec 26 23:21:52 multinode-455300 kubelet[1480]: I1226 23:21:52.787345    1480 scope.go:117] "RemoveContainer" containerID="2c33bdd1003a57081d900aa7690746f3b9bbd4507d4b77e6df030ee0d5a9f8ca"
	Dec 26 23:21:52 multinode-455300 kubelet[1480]: E1226 23:21:52.826440    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:21:52 multinode-455300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:21:52 multinode-455300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:21:52 multinode-455300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:22:52 multinode-455300 kubelet[1480]: E1226 23:22:52.822792    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:22:52 multinode-455300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:22:52 multinode-455300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:22:52 multinode-455300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:23:52 multinode-455300 kubelet[1480]: E1226 23:23:52.822532    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:23:52 multinode-455300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:23:52 multinode-455300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:23:52 multinode-455300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 26 23:24:52 multinode-455300 kubelet[1480]: E1226 23:24:52.821126    1480 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 26 23:24:52 multinode-455300 kubelet[1480]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 26 23:24:52 multinode-455300 kubelet[1480]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 26 23:24:52 multinode-455300 kubelet[1480]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W1226 23:25:28.473116    6504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
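The "Unable to resolve the current Docker CLI context" warning in the stderr capture above points at a path whose directory name is not arbitrary: the Docker CLI stores context metadata under `.docker/contexts/meta/<sha256(context-name)>/meta.json`, keyed by the SHA-256 digest of the context name. As a sketch (assuming a Unix-like shell with `sha256sum`, not the Windows host these tests ran on), the digest in the logged path is reproducible from the name "default":

```shell
# Docker CLI context metadata lives at .docker/contexts/meta/<digest>/meta.json,
# where <digest> is the SHA-256 of the context name. Hashing "default"
# reproduces the directory name seen in the warning above.
printf '%s' default | sha256sum
# -> 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f  -
```

The warning is benign for these tests (minikube falls back past the unresolvable context); on a host where it matters, re-selecting the context (e.g. `docker context use default`) typically clears it.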
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-455300 -n multinode-455300
E1226 23:25:48.815735   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-455300 -n multinode-455300: (12.3811765s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-455300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (502.69s)
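The kubelet journal above repeats the same "Could not set up iptables canary" error once a minute: the guest kernel exposes no ip6tables `nat` table, so creating the `KUBE-KUBELET-CANARY` chain fails with exit status 3, exactly as the embedded ip6tables message says. A minimal in-guest probe for this condition (a sketch only, assuming root inside the minikube VM rather than the Windows host) would be:

```shell
# Probe for the ip6tables nat table the kubelet canary needs; if it is
# absent, suggest loading the ip6table_nat kernel module (the usual fix
# when the module is built for the running kernel).
ip6tables -t nat -L >/dev/null 2>&1 \
  || echo "ip6tables nat table unavailable; try: modprobe ip6table_nat"
```

The canary failure appears to be recurring log noise the kubelet tolerates; the test's actual failure came from the restart flow timing out, not from this error.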

TestRunningBinaryUpgrade (645.38s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.998748161.exe start -p running-upgrade-923100 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:133: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.998748161.exe start -p running-upgrade-923100 --memory=2200 --vm-driver=hyperv: (4m37.1175893s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-923100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E1226 23:51:05.441053   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-923100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (4m42.2824792s)

-- stdout --
	* [running-upgrade-923100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-923100 in cluster running-upgrade-923100
	* Updating the running hyperv "running-upgrade-923100" VM ...
	
	

-- /stdout --
** stderr ** 
	W1226 23:50:59.775359    6784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1226 23:50:59.857766    6784 out.go:296] Setting OutFile to fd 1580 ...
	I1226 23:50:59.858764    6784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:50:59.858764    6784 out.go:309] Setting ErrFile to fd 1816...
	I1226 23:50:59.858764    6784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:50:59.889979    6784 out.go:303] Setting JSON to false
	I1226 23:50:59.895985    6784 start.go:128] hostinfo: {"hostname":"minikube1","uptime":9058,"bootTime":1703625601,"procs":209,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 23:50:59.895985    6784 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 23:50:59.899976    6784 out.go:177] * [running-upgrade-923100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 23:50:59.905973    6784 notify.go:220] Checking for updates...
	I1226 23:50:59.908985    6784 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:50:59.912985    6784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 23:50:59.915976    6784 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 23:50:59.917976    6784 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 23:50:59.920976    6784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 23:50:59.923990    6784 config.go:182] Loaded profile config "running-upgrade-923100": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1226 23:50:59.923990    6784 start_flags.go:694] config upgrade: Driver=hyperv
	I1226 23:50:59.923990    6784 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 23:50:59.923990    6784 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-923100\config.json ...
	I1226 23:50:59.931991    6784 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 23:50:59.935011    6784 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 23:51:06.454951    6784 out.go:177] * Using the hyperv driver based on existing profile
	I1226 23:51:06.717040    6784 start.go:298] selected driver: hyperv
	I1226 23:51:06.717842    6784 start.go:902] validating driver "hyperv" against &{Name:running-upgrade-923100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.21.179.140 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 23:51:06.717971    6784 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 23:51:06.776626    6784 cni.go:84] Creating CNI manager for ""
	I1226 23:51:06.777631    6784 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1226 23:51:06.777631    6784 start_flags.go:323] config:
	{Name:running-upgrade-923100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.21.179.140 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 23:51:06.777631    6784 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:06.961846    6784 out.go:177] * Starting control plane node running-upgrade-923100 in cluster running-upgrade-923100
	I1226 23:51:07.111582    6784 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1226 23:51:07.166568    6784 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1226 23:51:07.166864    6784 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-923100\config.json ...
	I1226 23:51:07.166864    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1226 23:51:07.167069    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I1226 23:51:07.167069    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1226 23:51:07.167172    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1226 23:51:07.166864    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I1226 23:51:07.166864    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I1226 23:51:07.167037    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I1226 23:51:07.167172    6784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1226 23:51:07.170463    6784 start.go:365] acquiring machines lock for running-upgrade-923100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:51:07.373006    6784 cache.go:107] acquiring lock: {Name:mk4e8ee16ba5b475b341c78282e92381b8584a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.373006    6784 cache.go:107] acquiring lock: {Name:mkbbc88bc55edd0ef8bd1c53673fe74e0129caa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.373006    6784 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1226 23:51:07.373006    6784 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1226 23:51:07.377006    6784 cache.go:107] acquiring lock: {Name:mkcd99a49ef11cbbf53d95904dadb7eadb7e30f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.377006    6784 cache.go:107] acquiring lock: {Name:mk69342e4f48cfcf5669830048d73215a892bfa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.377006    6784 cache.go:107] acquiring lock: {Name:mk7a50c4bf2c20bec1fff9de3ac74780139c1c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.377996    6784 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1226 23:51:07.377996    6784 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1226 23:51:07.377996    6784 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1226 23:51:07.382983    6784 cache.go:107] acquiring lock: {Name:mka7be082bbc64a256cc388eda31b6c9edba386f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.382983    6784 cache.go:107] acquiring lock: {Name:mkf253ced278c18e0b579f9f5e07f6a2fe7db678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.382983    6784 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:51:07.382983    6784 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1226 23:51:07.383999    6784 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1226 23:51:07.383999    6784 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1226 23:51:07.383999    6784 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 216.9627ms
	I1226 23:51:07.383999    6784 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1226 23:51:07.389981    6784 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1226 23:51:07.390998    6784 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1226 23:51:07.395998    6784 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1226 23:51:07.398029    6784 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1226 23:51:07.401992    6784 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1226 23:51:07.426037    6784 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1226 23:51:07.431060    6784 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	W1226 23:51:07.517273    6784 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1226 23:51:07.628614    6784 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1226 23:51:07.732847    6784 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1226 23:51:07.850438    6784 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1226 23:51:07.954019    6784 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1226 23:51:08.043570    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I1226 23:51:08.055822    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	W1226 23:51:08.068191    6784 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1226 23:51:08.106168    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1226 23:51:08.107163    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	W1226 23:51:08.169450    6784 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1226 23:51:08.255215    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I1226 23:51:08.255215    6784 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 1.0874359s
	I1226 23:51:08.255215    6784 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I1226 23:51:08.293444    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1226 23:51:08.329705    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1226 23:51:08.453760    6784 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I1226 23:51:09.080646    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I1226 23:51:09.080646    6784 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 1.9132377s
	I1226 23:51:09.080646    6784 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I1226 23:51:09.174323    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I1226 23:51:09.174323    6784 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 2.0065448s
	I1226 23:51:09.175329    6784 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I1226 23:51:09.417483    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I1226 23:51:09.417483    6784 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 2.2498307s
	I1226 23:51:09.417483    6784 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I1226 23:51:09.448424    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I1226 23:51:09.448846    6784 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 2.2815234s
	I1226 23:51:09.448949    6784 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I1226 23:51:09.873411    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I1226 23:51:09.908556    6784 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 2.741455s
	I1226 23:51:09.908655    6784 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I1226 23:51:10.137649    6784 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I1226 23:51:10.137871    6784 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 2.9700927s
	I1226 23:51:10.137871    6784 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I1226 23:51:10.138002    6784 cache.go:87] Successfully saved all images to host disk.
	I1226 23:54:13.285327    6784 start.go:369] acquired machines lock for "running-upgrade-923100" in 3m6.1149317s
	I1226 23:54:13.285327    6784 start.go:96] Skipping create...Using existing machine configuration
	I1226 23:54:13.285327    6784 fix.go:54] fixHost starting: minikube
	I1226 23:54:13.286561    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:15.660075    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:15.660165    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:15.660267    6784 fix.go:102] recreateIfNeeded on running-upgrade-923100: state=Running err=<nil>
	W1226 23:54:15.660267    6784 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 23:54:15.663527    6784 out.go:177] * Updating the running hyperv "running-upgrade-923100" VM ...
	I1226 23:54:15.666548    6784 machine.go:88] provisioning docker machine ...
	I1226 23:54:15.667538    6784 buildroot.go:166] provisioning hostname "running-upgrade-923100"
	I1226 23:54:15.667538    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:18.028116    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:18.028116    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:18.028334    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:20.968679    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:20.968679    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:20.978689    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:54:20.979706    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:54:20.979706    6784 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-923100 && echo "running-upgrade-923100" | sudo tee /etc/hostname
	I1226 23:54:21.157486    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-923100
	
	I1226 23:54:21.157618    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:23.590763    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:23.591159    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:23.591290    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:26.281007    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:26.281281    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:26.286783    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:54:26.287563    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:54:26.287646    6784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-923100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-923100/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-923100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 23:54:26.444816    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 23:54:26.444875    6784 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1226 23:54:26.444967    6784 buildroot.go:174] setting up certificates
	I1226 23:54:26.444967    6784 provision.go:83] configureAuth start
	I1226 23:54:26.444967    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:28.794307    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:28.794307    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:28.794417    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:31.824561    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:31.824680    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:31.824680    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:34.096461    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:34.096714    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:34.096929    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:36.852753    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:36.852753    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:36.852857    6784 provision.go:138] copyHostCerts
	I1226 23:54:36.853469    6784 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1226 23:54:36.853469    6784 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1226 23:54:36.853635    6784 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1226 23:54:36.855145    6784 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1226 23:54:36.855145    6784 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1226 23:54:36.855145    6784 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1226 23:54:36.857296    6784 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1226 23:54:36.857296    6784 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1226 23:54:36.857788    6784 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1226 23:54:36.858847    6784 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-923100 san=[172.21.179.140 172.21.179.140 localhost 127.0.0.1 minikube running-upgrade-923100]
	I1226 23:54:37.229810    6784 provision.go:172] copyRemoteCerts
	I1226 23:54:37.245120    6784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 23:54:37.245120    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:39.512906    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:39.512906    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:39.513019    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:42.300970    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:42.300970    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:42.301445    6784 sshutil.go:53] new ssh client: &{IP:172.21.179.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-923100\id_rsa Username:docker}
	I1226 23:54:42.420460    6784 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1753422s)
	I1226 23:54:42.420460    6784 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 23:54:42.461521    6784 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 23:54:42.512336    6784 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1176 bytes)
	I1226 23:54:42.538180    6784 provision.go:86] duration metric: configureAuth took 16.0932186s
	I1226 23:54:42.538180    6784 buildroot.go:189] setting minikube options for container-runtime
	I1226 23:54:42.539156    6784 config.go:182] Loaded profile config "running-upgrade-923100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1226 23:54:42.539156    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:44.892324    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:44.892324    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:44.892448    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:47.599260    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:47.599260    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:47.607142    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:54:47.608076    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:54:47.608076    6784 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1226 23:54:47.766720    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1226 23:54:47.766720    6784 buildroot.go:70] root file system type: tmpfs
	I1226 23:54:47.766720    6784 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1226 23:54:47.767281    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:49.990493    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:49.990493    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:49.990493    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:52.825284    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:52.825284    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:52.833617    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:54:52.834288    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:54:52.834407    6784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1226 23:54:53.004564    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1226 23:54:53.004564    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:54:55.268807    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:54:55.268807    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:55.268807    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:54:57.995093    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:54:57.995172    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:54:58.003688    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:54:58.004568    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:54:58.004568    6784 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1226 23:55:12.452123    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1226 23:55:12.452240    6784 machine.go:91] provisioned docker machine in 56.7857153s
	I1226 23:55:12.452240    6784 start.go:300] post-start starting for "running-upgrade-923100" (driver="hyperv")
	I1226 23:55:12.452378    6784 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 23:55:12.469214    6784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 23:55:12.469214    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:55:14.713266    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:55:14.713266    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:14.713397    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:55:17.406141    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:55:17.406366    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:17.406596    6784 sshutil.go:53] new ssh client: &{IP:172.21.179.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-923100\id_rsa Username:docker}
	I1226 23:55:17.521856    6784 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0524858s)
	I1226 23:55:17.535622    6784 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 23:55:17.542373    6784 info.go:137] Remote host: Buildroot 2019.02.7
	I1226 23:55:17.542684    6784 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1226 23:55:17.543324    6784 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1226 23:55:17.544741    6784 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1226 23:55:17.558488    6784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 23:55:17.567946    6784 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1226 23:55:17.588433    6784 start.go:303] post-start completed in 5.136012s
	I1226 23:55:17.588506    6784 fix.go:56] fixHost completed within 1m4.3032052s
	I1226 23:55:17.588641    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:55:19.871163    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:55:19.873159    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:19.873159    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:55:22.774755    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:55:22.774934    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:22.783091    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:55:22.783754    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:55:22.783754    6784 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1226 23:55:22.940148    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703634922.927671603
	
	I1226 23:55:22.940349    6784 fix.go:206] guest clock: 1703634922.927671603
	I1226 23:55:22.940583    6784 fix.go:219] Guest: 2023-12-26 23:55:22.927671603 +0000 UTC Remote: 2023-12-26 23:55:17.5885068 +0000 UTC m=+257.920013701 (delta=5.339164803s)
	I1226 23:55:22.940929    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:55:25.228052    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:55:25.228421    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:25.228421    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:55:27.906070    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:55:27.906070    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:27.912270    6784 main.go:141] libmachine: Using SSH client type: native
	I1226 23:55:27.912980    6784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.140 22 <nil> <nil>}
	I1226 23:55:27.912980    6784 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703634922
	I1226 23:55:28.083885    6784 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Dec 26 23:55:22 UTC 2023
	
	I1226 23:55:28.083963    6784 fix.go:226] clock set: Tue Dec 26 23:55:22 UTC 2023
	 (err=<nil>)
	I1226 23:55:28.083963    6784 start.go:83] releasing machines lock for "running-upgrade-923100", held for 1m14.7986664s
	I1226 23:55:28.083963    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:55:30.410011    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:55:30.410272    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:30.410390    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:55:33.225010    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:55:33.225010    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:33.229684    6784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 23:55:33.229684    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:55:33.250535    6784 ssh_runner.go:195] Run: cat /version.json
	I1226 23:55:33.250535    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-923100 ).state
	I1226 23:55:35.911396    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:55:35.911396    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:35.911396    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:55:35.993170    6784 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:55:35.993170    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:35.993284    6784 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-923100 ).networkadapters[0]).ipaddresses[0]
	I1226 23:55:38.966409    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:55:38.966489    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:38.966927    6784 sshutil.go:53] new ssh client: &{IP:172.21.179.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-923100\id_rsa Username:docker}
	I1226 23:55:39.010146    6784 main.go:141] libmachine: [stdout =====>] : 172.21.179.140
	
	I1226 23:55:39.010146    6784 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:55:39.010146    6784 sshutil.go:53] new ssh client: &{IP:172.21.179.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-923100\id_rsa Username:docker}
	I1226 23:55:39.163187    6784 ssh_runner.go:235] Completed: cat /version.json: (5.9126538s)
	W1226 23:55:39.163335    6784 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1226 23:55:39.163187    6784 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.933505s)
	I1226 23:55:39.177866    6784 ssh_runner.go:195] Run: systemctl --version
	I1226 23:55:39.210593    6784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1226 23:55:39.220372    6784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1226 23:55:39.256859    6784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1226 23:55:39.288264    6784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1226 23:55:39.302810    6784 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1226 23:55:39.302810    6784 start.go:475] detecting cgroup driver to use...
	I1226 23:55:39.302810    6784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:55:39.354820    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1226 23:55:39.393749    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1226 23:55:39.412161    6784 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1226 23:55:39.427166    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1226 23:55:39.463511    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:55:39.487630    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1226 23:55:39.511732    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1226 23:55:39.538542    6784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 23:55:39.565140    6784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1226 23:55:39.605078    6784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 23:55:39.629601    6784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 23:55:39.673593    6784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:55:39.821416    6784 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1226 23:55:39.844082    6784 start.go:475] detecting cgroup driver to use...
	I1226 23:55:39.862395    6784 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1226 23:55:39.904910    6784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:55:39.935613    6784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 23:55:39.999364    6784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 23:55:40.029930    6784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1226 23:55:40.045476    6784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 23:55:40.080489    6784 ssh_runner.go:195] Run: which cri-dockerd
	I1226 23:55:40.105247    6784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1226 23:55:40.114947    6784 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1226 23:55:40.145253    6784 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1226 23:55:40.296979    6784 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1226 23:55:40.423738    6784 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1226 23:55:40.424305    6784 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1226 23:55:40.453019    6784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 23:55:40.591738    6784 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1226 23:55:41.686141    6784 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0944034s)
	I1226 23:55:41.702947    6784 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1226 23:55:41.851750    6784 out.go:177] 
	W1226 23:55:41.867219    6784 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2023-12-26 23:47:59 UTC, end at Tue 2023-12-26 23:55:41 UTC. --
	Dec 26 23:49:30 running-upgrade-923100 systemd[1]: Starting Docker Application Container Engine...
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.822476391Z" level=info msg="Starting up"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825532691Z" level=info msg="libcontainerd: started new containerd process" pid=2754
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825610991Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825632391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825665691Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825705491Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.888115091Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889245991Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889403991Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889677891Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889870791Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.891635691Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.891822691Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.891951791Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892243691Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892657791Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892788291Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892868091Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892896191Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892904791Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924197691Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924356691Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924419591Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924442391Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924462991Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924484091Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924514591Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924540991Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924558991Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924582491Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924930991Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.925146591Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.925939691Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926128191Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926175791Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926192091Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926203091Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926213691Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926224091Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926235291Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926253591Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926264491Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926274991Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926340791Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926363791Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926384191Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926394791Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926561791Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926723591Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926810091Z" level=info msg="containerd successfully booted in 0.040375s"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938777891Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938868191Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938895191Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938906591Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940691391Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940898491Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940932391Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940951291Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.999867791Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.999978091Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.999992891Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000000591Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000008491Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000109891Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000336491Z" level=info msg="Loading containers: start."
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.190781591Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.306790991Z" level=info msg="Loading containers: done."
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.346756591Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.346951891Z" level=info msg="Daemon has completed initialization"
	Dec 26 23:49:31 running-upgrade-923100 systemd[1]: Started Docker Application Container Engine.
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.476976591Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.477108391Z" level=info msg="API listen on [::]:2376"
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.699170212Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/443539ef9f335511c257ec2434180c9b3309eff1d2359121e043ae3cf98c7cc8/shim.sock" debug=false pid=4390
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.772996918Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0b1502eb9c0ec797f18beba2200aad511862cd88ce32341d5dcf34cdc1a628c0/shim.sock" debug=false pid=4414
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.829073646Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/30e71d08b170c6ffeee9e06069f2e0f795c4e52198bfe79b6d8293b25f51a49c/shim.sock" debug=false pid=4432
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.986776846Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a5e3afa5b48dc06ffef4a496a52d0f1efd89c03e4326cc443033dbdf2b83a455/shim.sock" debug=false pid=4488
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.051414868Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3247bd1429d468e4697ddaefa0475ca3438c7e9d3456408f6db5b91129e88836/shim.sock" debug=false pid=4521
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.210918178Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80/shim.sock" debug=false pid=4580
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.385574670Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d/shim.sock" debug=false pid=4649
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.422095626Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f/shim.sock" debug=false pid=4674
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.713806478Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773/shim.sock" debug=false pid=4755
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.779240301Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0/shim.sock" debug=false pid=4776
	Dec 26 23:51:06 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:06.095502870Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc5709ac6d2d55115749f8a53cd4de85ae198bc02a3d4f982f8dabb96e43b062/shim.sock" debug=false pid=5573
	Dec 26 23:51:15 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:15.299124963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607/shim.sock" debug=false pid=5662
	Dec 26 23:51:16 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:16.931485336Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cbe980195fd76fe96ebdb3df032d4b67ded7ec4fd5a59ef24ff7da3aa45eee74/shim.sock" debug=false pid=5785
	Dec 26 23:51:16 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:16.945316431Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/53b2e2b0d34e7b43c38e6fdbc85f34b09cfa2a14562c3a5388f39833530517e9/shim.sock" debug=false pid=5795
	Dec 26 23:51:17 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:17.756119086Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7/shim.sock" debug=false pid=5929
	Dec 26 23:51:17 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:17.868618559Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd/shim.sock" debug=false pid=5954
	Dec 26 23:51:19 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:19.007967673Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9e4aabff20c960bea613fb339da5c98ac61649a786e80faa64a440b946185a43/shim.sock" debug=false pid=6070
	Dec 26 23:51:19 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:19.387596516Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07/shim.sock" debug=false pid=6113
	Dec 26 23:54:58 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:58.566269058Z" level=info msg="Processing signal 'terminated'"
	Dec 26 23:54:58 running-upgrade-923100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.795446390Z" level=info msg="shim reaped" id=c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.806181496Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.806483705Z" level=warning msg="c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.823592193Z" level=info msg="shim reaped" id=9e4aabff20c960bea613fb339da5c98ac61649a786e80faa64a440b946185a43
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.834955017Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.861189266Z" level=info msg="shim reaped" id=3247bd1429d468e4697ddaefa0475ca3438c7e9d3456408f6db5b91129e88836
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.872006074Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.873161807Z" level=info msg="shim reaped" id=53b2e2b0d34e7b43c38e6fdbc85f34b09cfa2a14562c3a5388f39833530517e9
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.877004617Z" level=info msg="shim reaped" id=d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.883542103Z" level=info msg="shim reaped" id=30e71d08b170c6ffeee9e06069f2e0f795c4e52198bfe79b6d8293b25f51a49c
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.883678107Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.887936129Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.888280138Z" level=warning msg="d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.897511302Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.919181620Z" level=info msg="shim reaped" id=a5e3afa5b48dc06ffef4a496a52d0f1efd89c03e4326cc443033dbdf2b83a455
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.950092802Z" level=info msg="shim reaped" id=443539ef9f335511c257ec2434180c9b3309eff1d2359121e043ae3cf98c7cc8
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.953005185Z" level=info msg="shim reaped" id=0b1502eb9c0ec797f18beba2200aad511862cd88ce32341d5dcf34cdc1a628c0
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.960818008Z" level=info msg="shim reaped" id=cbe980195fd76fe96ebdb3df032d4b67ded7ec4fd5a59ef24ff7da3aa45eee74
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.965797450Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.972223633Z" level=info msg="shim reaped" id=24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.976891166Z" level=info msg="shim reaped" id=dc5709ac6d2d55115749f8a53cd4de85ae198bc02a3d4f982f8dabb96e43b062
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.977527384Z" level=info msg="shim reaped" id=daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980121758Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980228061Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980315764Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980384966Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980454168Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980944182Z" level=warning msg="24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.985860222Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.986206232Z" level=warning msg="daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.013963016Z" level=info msg="shim reaped" id=f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.025660642Z" level=info msg="shim reaped" id=467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.029634153Z" level=warning msg="467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.035743424Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.035770425Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.035869527Z" level=warning msg="f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.463493366Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf/shim.sock" debug=false pid=9214
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.828636161Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f/shim.sock" debug=false pid=9279
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.280677714Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621/shim.sock" debug=false pid=9348
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.456971231Z" level=info msg="shim reaped" id=70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.467629722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.467891429Z" level=warning msg="70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.645571583Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532/shim.sock" debug=false pid=9417
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.137431177Z" level=info msg="shim reaped" id=ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.147284329Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.147470434Z" level=warning msg="ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.159535142Z" level=info msg="shim reaped" id=dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.169177189Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.169480197Z" level=warning msg="dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.618250978Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd/shim.sock" debug=false pid=9596
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.642330794Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda/shim.sock" debug=false pid=9610
	Dec 26 23:55:05 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:05.294180005Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618/shim.sock" debug=false pid=9721
	Dec 26 23:55:05 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:05.344693269Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6/shim.sock" debug=false pid=9738
	Dec 26 23:55:08 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:08.960559740Z" level=info msg="Container c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f failed to exit within 10 seconds of signal 15 - using the force"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.155365315Z" level=info msg="shim reaped" id=c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.165771753Z" level=warning msg="c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.165822054Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.260970329Z" level=info msg="Daemon shutdown complete"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.261279136Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.261420940Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.263312283Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Succeeded.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9214 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9348 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9417 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9596 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9610 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9721 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9738 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: Starting Docker Application Container Engine...
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.329330284Z" level=info msg="Starting up"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332267850Z" level=info msg="libcontainerd: started new containerd process" pid=9917
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332341352Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332365352Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332397953Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332413153Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.380436926Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.380940738Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.381529751Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.381902159Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.382074463Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.385331536Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.385477439Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.386433060Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387321580Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387745090Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387852492Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387882493Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387892493Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387899293Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388098397Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388197300Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388243901Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388259801Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388272501Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388286602Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388305902Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388330903Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388344503Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388355603Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.418615279Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.418867485Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.419460498Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.420886430Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421102435Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421125335Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421196537Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421217537Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421229438Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421243138Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421255138Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421266639Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421278139Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421317140Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421332940Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421345440Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421369641Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421539545Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421719349Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421735149Z" level=info msg="containerd successfully booted in 0.044392s"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.431904676Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.431994578Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.432078380Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.432118981Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435255951Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435449455Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435523157Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435545558Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.440674972Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.536958624Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537307931Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537421334Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537436534Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537445234Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537452735Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537744341Z" level=info msg="Loading containers: start."
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.244537509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.283697964Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.305636243Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.306175255Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.307357581Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.307806290Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.358942007Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.359953429Z" level=warning msg="2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.364179921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.380730183Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.390664599Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.391131810Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.392557941Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.392974750Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.398469570Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.431881399Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.602263219Z" level=info msg="Removing stale sandbox 0871541ae54b56291376ae1b1587b0ace60e3ec62e3c07ddb0539db3e7dc26b3 (fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621)"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.610208793Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint f0d50d190737961676ab5ad970f8ae7faa7f562b917d733a96f2f36868a1b2cb 5a3209b9563521d1e72a91248ede29fb62eb50ef020525f0d6efe4bc6a39086f], retrying...."
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.766245599Z" level=info msg="Removing stale sandbox 0f7eadd73e3d5fa25d1aa508e4f2c8472c927a75dde1a4ec0dde6ed84f877eb9 (6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda)"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.784866906Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fa1235101a7df5310488b7dc536e5f49bf34b619b37c00ecda83fe38ab180f9c 1ee3e13c16e06737d8ab01d5d26f5628b9727f947a4e4e8d9c0b0376d07ed740], retrying...."
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.974475446Z" level=info msg="Removing stale sandbox 59c27b4891f6eb922d9e7ea78d4177e7ccf90b20940a739bb533a5c2a6e075b0 (fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf)"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.986355105Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint f0d50d190737961676ab5ad970f8ae7faa7f562b917d733a96f2f36868a1b2cb 823da2b00f45d8fc6e213d96957c51066523a84b5922d8e1fa1e4c242dfbbe8f], retrying...."
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.161702653Z" level=info msg="Removing stale sandbox 9e5fffdb723f3854fd992e40c6f8161b2f3f24f39803059be693b46a7244cf37 (0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd)"
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.176611671Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fa1235101a7df5310488b7dc536e5f49bf34b619b37c00ecda83fe38ab180f9c 7c711fb40fb7dfcd1dc220933c57e8ba74dcd7171d6cec4658aeef75cfd58a0f], retrying...."
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.212408834Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.275794386Z" level=info msg="Loading containers: done."
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.342694913Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.342917018Z" level=info msg="Daemon has completed initialization"
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.437857143Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 26 23:55:12 running-upgrade-923100 systemd[1]: Started Docker Application Container Engine.
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.440174793Z" level=info msg="API listen on [::]:2376"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.088248813Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.088978827Z" level=warning msg="7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.094709338Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.095487453Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.127687978Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.128222088Z" level=warning msg="8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.143493184Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.143878192Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:40 running-upgrade-923100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.596770904Z" level=info msg="Processing signal 'terminated'"
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.598848483Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.599575775Z" level=info msg="Daemon shutdown complete"
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.599632874Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.599639374Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: docker.service: Succeeded.
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: Starting Docker Application Container Engine...
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.667887122Z" level=info msg="Starting up"
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670342497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670462695Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670493895Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670518195Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670880691Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2023-12-26 23:47:59 UTC, end at Tue 2023-12-26 23:55:41 UTC. --
	Dec 26 23:49:30 running-upgrade-923100 systemd[1]: Starting Docker Application Container Engine...
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.822476391Z" level=info msg="Starting up"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825532691Z" level=info msg="libcontainerd: started new containerd process" pid=2754
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825610991Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825632391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825665691Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.825705491Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.888115091Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889245991Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889403991Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889677891Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.889870791Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.891635691Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.891822691Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.891951791Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892243691Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892657791Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892788291Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892868091Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892896191Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.892904791Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924197691Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924356691Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924419591Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924442391Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924462991Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924484091Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924514591Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924540991Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924558991Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924582491Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.924930991Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.925146591Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.925939691Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926128191Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926175791Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926192091Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926203091Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926213691Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926224091Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926235291Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926253591Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926264491Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926274991Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926340791Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926363791Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926384191Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926394791Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926561791Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926723591Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.926810091Z" level=info msg="containerd successfully booted in 0.040375s"
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938777891Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938868191Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938895191Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.938906591Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940691391Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940898491Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940932391Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:49:30 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.940951291Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.999867791Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.999978091Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:30.999992891Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000000591Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000008491Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000109891Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.000336491Z" level=info msg="Loading containers: start."
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.190781591Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.306790991Z" level=info msg="Loading containers: done."
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.346756591Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.346951891Z" level=info msg="Daemon has completed initialization"
	Dec 26 23:49:31 running-upgrade-923100 systemd[1]: Started Docker Application Container Engine.
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.476976591Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 26 23:49:31 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:49:31.477108391Z" level=info msg="API listen on [::]:2376"
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.699170212Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/443539ef9f335511c257ec2434180c9b3309eff1d2359121e043ae3cf98c7cc8/shim.sock" debug=false pid=4390
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.772996918Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0b1502eb9c0ec797f18beba2200aad511862cd88ce32341d5dcf34cdc1a628c0/shim.sock" debug=false pid=4414
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.829073646Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/30e71d08b170c6ffeee9e06069f2e0f795c4e52198bfe79b6d8293b25f51a49c/shim.sock" debug=false pid=4432
	Dec 26 23:50:43 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:43.986776846Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a5e3afa5b48dc06ffef4a496a52d0f1efd89c03e4326cc443033dbdf2b83a455/shim.sock" debug=false pid=4488
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.051414868Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3247bd1429d468e4697ddaefa0475ca3438c7e9d3456408f6db5b91129e88836/shim.sock" debug=false pid=4521
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.210918178Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80/shim.sock" debug=false pid=4580
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.385574670Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d/shim.sock" debug=false pid=4649
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.422095626Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f/shim.sock" debug=false pid=4674
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.713806478Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773/shim.sock" debug=false pid=4755
	Dec 26 23:50:44 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:50:44.779240301Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0/shim.sock" debug=false pid=4776
	Dec 26 23:51:06 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:06.095502870Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc5709ac6d2d55115749f8a53cd4de85ae198bc02a3d4f982f8dabb96e43b062/shim.sock" debug=false pid=5573
	Dec 26 23:51:15 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:15.299124963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607/shim.sock" debug=false pid=5662
	Dec 26 23:51:16 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:16.931485336Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cbe980195fd76fe96ebdb3df032d4b67ded7ec4fd5a59ef24ff7da3aa45eee74/shim.sock" debug=false pid=5785
	Dec 26 23:51:16 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:16.945316431Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/53b2e2b0d34e7b43c38e6fdbc85f34b09cfa2a14562c3a5388f39833530517e9/shim.sock" debug=false pid=5795
	Dec 26 23:51:17 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:17.756119086Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7/shim.sock" debug=false pid=5929
	Dec 26 23:51:17 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:17.868618559Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd/shim.sock" debug=false pid=5954
	Dec 26 23:51:19 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:19.007967673Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9e4aabff20c960bea613fb339da5c98ac61649a786e80faa64a440b946185a43/shim.sock" debug=false pid=6070
	Dec 26 23:51:19 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:51:19.387596516Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07/shim.sock" debug=false pid=6113
	Dec 26 23:54:58 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:58.566269058Z" level=info msg="Processing signal 'terminated'"
	Dec 26 23:54:58 running-upgrade-923100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.795446390Z" level=info msg="shim reaped" id=c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.806181496Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.806483705Z" level=warning msg="c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c50b4d9e8d1e760b9c8bd41ac0eaf8a620b5eeacda2434ea22a92255d3983d07/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.823592193Z" level=info msg="shim reaped" id=9e4aabff20c960bea613fb339da5c98ac61649a786e80faa64a440b946185a43
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.834955017Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.861189266Z" level=info msg="shim reaped" id=3247bd1429d468e4697ddaefa0475ca3438c7e9d3456408f6db5b91129e88836
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.872006074Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.873161807Z" level=info msg="shim reaped" id=53b2e2b0d34e7b43c38e6fdbc85f34b09cfa2a14562c3a5388f39833530517e9
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.877004617Z" level=info msg="shim reaped" id=d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.883542103Z" level=info msg="shim reaped" id=30e71d08b170c6ffeee9e06069f2e0f795c4e52198bfe79b6d8293b25f51a49c
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.883678107Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.887936129Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.888280138Z" level=warning msg="d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d99358fc2278f1a118c452572da4e8508689bd988671b84faf174bcfcd8128f0/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.897511302Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.919181620Z" level=info msg="shim reaped" id=a5e3afa5b48dc06ffef4a496a52d0f1efd89c03e4326cc443033dbdf2b83a455
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.950092802Z" level=info msg="shim reaped" id=443539ef9f335511c257ec2434180c9b3309eff1d2359121e043ae3cf98c7cc8
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.953005185Z" level=info msg="shim reaped" id=0b1502eb9c0ec797f18beba2200aad511862cd88ce32341d5dcf34cdc1a628c0
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.960818008Z" level=info msg="shim reaped" id=cbe980195fd76fe96ebdb3df032d4b67ded7ec4fd5a59ef24ff7da3aa45eee74
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.965797450Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.972223633Z" level=info msg="shim reaped" id=24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.976891166Z" level=info msg="shim reaped" id=dc5709ac6d2d55115749f8a53cd4de85ae198bc02a3d4f982f8dabb96e43b062
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.977527384Z" level=info msg="shim reaped" id=daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980121758Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980228061Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980315764Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980384966Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980454168Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.980944182Z" level=warning msg="24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/24a3e07f9a7a1af9d1b69bc3ce181a515db87331f9a1a5311c429a0715ca3773/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.985860222Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:54:59 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:54:59.986206232Z" level=warning msg="daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/daba43e3d7713a15befa1cdd190724845ab18163717531cd3ad3dec838820607/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.013963016Z" level=info msg="shim reaped" id=f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.025660642Z" level=info msg="shim reaped" id=467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.029634153Z" level=warning msg="467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/467c4cec804cefaf010e110609bc3cc02e73a6e349ab7a85ef51e328b911fb80/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.035743424Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.035770425Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.035869527Z" level=warning msg="f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f8da6e667a53352d7b525ef1ea0f8bf8e9a5f8504a049a628ee54faa3ae23c4d/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.463493366Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf/shim.sock" debug=false pid=9214
	Dec 26 23:55:00 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:00.828636161Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f/shim.sock" debug=false pid=9279
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.280677714Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621/shim.sock" debug=false pid=9348
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.456971231Z" level=info msg="shim reaped" id=70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.467629722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.467891429Z" level=warning msg="70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/70c310d3c2e97495d16ad0118ab21b241329085eda51073e7dedfe1875c8833f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:01 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:01.645571583Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532/shim.sock" debug=false pid=9417
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.137431177Z" level=info msg="shim reaped" id=ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.147284329Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.147470434Z" level=warning msg="ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ec7d771daf6007922bb20d294a70cfeaef1eb6fa8637beb56b2955c21272dfcd/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.159535142Z" level=info msg="shim reaped" id=dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.169177189Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.169480197Z" level=warning msg="dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/dcd7f81dba21836608a2f5652e0027e70b94c0abb6857557ba4d03591a18fbf7/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.618250978Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd/shim.sock" debug=false pid=9596
	Dec 26 23:55:04 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:04.642330794Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda/shim.sock" debug=false pid=9610
	Dec 26 23:55:05 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:05.294180005Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618/shim.sock" debug=false pid=9721
	Dec 26 23:55:05 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:05.344693269Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6/shim.sock" debug=false pid=9738
	Dec 26 23:55:08 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:08.960559740Z" level=info msg="Container c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f failed to exit within 10 seconds of signal 15 - using the force"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.155365315Z" level=info msg="shim reaped" id=c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.165771753Z" level=warning msg="c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c9a41b71bbb717e3887446351c9abcfaca7420d35a02eb98e08de79f3a308d6f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.165822054Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.260970329Z" level=info msg="Daemon shutdown complete"
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.261279136Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.261420940Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 26 23:55:09 running-upgrade-923100 dockerd[2746]: time="2023-12-26T23:55:09.263312283Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Succeeded.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9214 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9348 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9417 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9596 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9610 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9721 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: docker.service: Found left-over process 9738 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 26 23:55:10 running-upgrade-923100 systemd[1]: Starting Docker Application Container Engine...
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.329330284Z" level=info msg="Starting up"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332267850Z" level=info msg="libcontainerd: started new containerd process" pid=9917
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332341352Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332365352Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332397953Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.332413153Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.380436926Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.380940738Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.381529751Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.381902159Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.382074463Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.385331536Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.385477439Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.386433060Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387321580Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387745090Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387852492Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387882493Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387892493Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.387899293Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388098397Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388197300Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388243901Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388259801Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388272501Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388286602Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388305902Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388330903Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388344503Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.388355603Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.418615279Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.418867485Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.419460498Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.420886430Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421102435Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421125335Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421196537Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421217537Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421229438Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421243138Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421255138Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421266639Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421278139Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421317140Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421332940Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421345440Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421369641Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421539545Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421719349Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.421735149Z" level=info msg="containerd successfully booted in 0.044392s"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.431904676Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.431994578Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.432078380Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.432118981Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435255951Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435449455Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435523157Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.435545558Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.440674972Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.536958624Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537307931Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537421334Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537436534Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537445234Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537452735Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 26 23:55:10 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:10.537744341Z" level=info msg="Loading containers: start."
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.244537509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.283697964Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.305636243Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.306175255Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.307357581Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.307806290Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.358942007Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.359953429Z" level=warning msg="2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.364179921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.380730183Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.390664599Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.391131810Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.392557941Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/2f9943e5fda32e0098a127a847f0b17b60ff34298c2f6585510efea1229ef532"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.392974750Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.398469570Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.431881399Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.602263219Z" level=info msg="Removing stale sandbox 0871541ae54b56291376ae1b1587b0ace60e3ec62e3c07ddb0539db3e7dc26b3 (fd35c2fc73fadb42c2a0aebb2f892d992f80259abdc361d32f1c4e1acdacd621)"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.610208793Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint f0d50d190737961676ab5ad970f8ae7faa7f562b917d733a96f2f36868a1b2cb 5a3209b9563521d1e72a91248ede29fb62eb50ef020525f0d6efe4bc6a39086f], retrying...."
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.766245599Z" level=info msg="Removing stale sandbox 0f7eadd73e3d5fa25d1aa508e4f2c8472c927a75dde1a4ec0dde6ed84f877eb9 (6d34b628563ea1d999eb4f286dc556f55c4b196a8a278b85db97155cb6d88dda)"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.784866906Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fa1235101a7df5310488b7dc536e5f49bf34b619b37c00ecda83fe38ab180f9c 1ee3e13c16e06737d8ab01d5d26f5628b9727f947a4e4e8d9c0b0376d07ed740], retrying...."
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.974475446Z" level=info msg="Removing stale sandbox 59c27b4891f6eb922d9e7ea78d4177e7ccf90b20940a739bb533a5c2a6e075b0 (fff097178011d213815b315ddb5ccbea50d988d1170e58d76a328dfbd94fd2cf)"
	Dec 26 23:55:11 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:11.986355105Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint f0d50d190737961676ab5ad970f8ae7faa7f562b917d733a96f2f36868a1b2cb 823da2b00f45d8fc6e213d96957c51066523a84b5922d8e1fa1e4c242dfbbe8f], retrying...."
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.161702653Z" level=info msg="Removing stale sandbox 9e5fffdb723f3854fd992e40c6f8161b2f3f24f39803059be693b46a7244cf37 (0f29597111d44b93bf25a62e99035c63a214198c92a6de5dc9c80b8ff41490dd)"
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.176611671Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fa1235101a7df5310488b7dc536e5f49bf34b619b37c00ecda83fe38ab180f9c 7c711fb40fb7dfcd1dc220933c57e8ba74dcd7171d6cec4658aeef75cfd58a0f], retrying...."
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.212408834Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.275794386Z" level=info msg="Loading containers: done."
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.342694913Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.342917018Z" level=info msg="Daemon has completed initialization"
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.437857143Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 26 23:55:12 running-upgrade-923100 systemd[1]: Started Docker Application Container Engine.
	Dec 26 23:55:12 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:12.440174793Z" level=info msg="API listen on [::]:2376"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.088248813Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.088978827Z" level=warning msg="7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.094709338Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7b4ec8c6eee81fd888f3a0d273014b112ee5c702622af153bc5231c15798a618"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.095487453Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.127687978Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.128222088Z" level=warning msg="8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6/mounts/shm, flags: 0x2: no such file or directory"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.143493184Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8a502de5e752ac357c0489948317bec736ba1c6b4356dd155ae3c603a69147f6"
	Dec 26 23:55:16 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:16.143878192Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 26 23:55:40 running-upgrade-923100 systemd[1]: Stopping Docker Application Container Engine...
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.596770904Z" level=info msg="Processing signal 'terminated'"
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.598848483Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.599575775Z" level=info msg="Daemon shutdown complete"
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.599632874Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 26 23:55:40 running-upgrade-923100 dockerd[9909]: time="2023-12-26T23:55:40.599639374Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: docker.service: Succeeded.
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: Stopped Docker Application Container Engine.
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: Starting Docker Application Container Engine...
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.667887122Z" level=info msg="Starting up"
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670342497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670462695Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670493895Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670518195Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: time="2023-12-26T23:55:41.670880691Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 26 23:55:41 running-upgrade-923100 dockerd[10796]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 26 23:55:41 running-upgrade-923100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1226 23:55:41.868480    6784 out.go:239] * 
	* 
	W1226 23:55:41.869430    6784 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 23:55:41.873954    6784 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-923100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-26 23:55:42.3158702 +0000 UTC m=+7777.862503301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-923100 -n running-upgrade-923100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-923100 -n running-upgrade-923100: exit status 6 (13.6072713s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:55:42.466773   11316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1226 23:55:55.854738   11316 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-923100" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-923100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-923100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-923100
E1226 23:56:05.434896   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-923100: (1m11.3519261s)
--- FAIL: TestRunningBinaryUpgrade (645.38s)
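Note on the recurring `Unable to resolve the current Docker CLI context "default"` warning above: the Docker CLI keys each context's metadata directory by the SHA-256 digest of the context name, which is why `default` resolves to the long hex directory in the error path. A quick check of that mapping:

```python
import hashlib

# The Docker CLI stores per-context metadata under
# ~/.docker/contexts/meta/<sha256(context name)>/meta.json.
# "default" hashes to the directory name seen in the warnings above.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The warning itself is benign here: the machine has no `meta.json` for the default context, so the CLI falls back, but minikube logs the lookup failure on every invocation.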

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (299.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-152600 --driver=hyperv
E1226 23:42:28.820368   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 23:43:36.120593   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:44:01.493099   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-152600 --driver=hyperv: exit status 1 (4m59.6082654s)

                                                
                                                
-- stdout --
	* [NoKubernetes-152600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-152600 in cluster NoKubernetes-152600

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:41:21.185777    4328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-152600 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-152600 -n NoKubernetes-152600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-152600 -n NoKubernetes-152600: exit status 7 (268.1405ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:46:20.761546    6636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-152600" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.88s)
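Note on the `windows sanitize` entries that appear in the stopped-upgrade logs below (e.g. `...\pause:3.1 -> ...\pause_3.1`): minikube maps image references to cache file names by replacing the image-tag separator `:`, which is invalid in Windows file names, while leaving the drive letter's colon intact. A minimal sketch of that substitution (`sanitize_cache_path` is a hypothetical helper for illustration, not minikube's actual function):

```python
def sanitize_cache_path(path: str) -> str:
    # Hypothetical sketch of minikube's Windows path sanitization:
    # keep a drive prefix like "C:" intact, then replace any remaining
    # ':' (the image-tag separator, invalid in Windows filenames) with '_'.
    if len(path) > 1 and path[1] == ":":
        drive, rest = path[:2], path[2:]
    else:
        drive, rest = "", path
    return drive + rest.replace(":", "_")

print(sanitize_cache_path(r"C:\cache\images\amd64\registry.k8s.io\pause:3.1"))
# → C:\cache\images\amd64\registry.k8s.io\pause_3.1
```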

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (674.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.2167964251.exe start -p stopped-upgrade-682800 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.2167964251.exe start -p stopped-upgrade-682800 --memory=2200 --vm-driver=hyperv: (4m59.1128624s)
version_upgrade_test.go:205: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.2167964251.exe -p stopped-upgrade-682800 stop
version_upgrade_test.go:205: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.2167964251.exe -p stopped-upgrade-682800 stop: (26.5258427s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-682800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E1226 23:58:36.120007   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:59:01.496257   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:59:08.831190   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-682800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (5m48.1778641s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-682800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-682800 in cluster stopped-upgrade-682800
	* Restarting existing hyperv VM for "stopped-upgrade-682800" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:58:00.709196   11032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1226 23:58:00.806059   11032 out.go:296] Setting OutFile to fd 1224 ...
	I1226 23:58:00.806875   11032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:58:00.806875   11032 out.go:309] Setting ErrFile to fd 2032...
	I1226 23:58:00.806875   11032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:58:00.834884   11032 out.go:303] Setting JSON to false
	I1226 23:58:00.838512   11032 start.go:128] hostinfo: {"hostname":"minikube1","uptime":9479,"bootTime":1703625601,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 23:58:00.838512   11032 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 23:58:00.851548   11032 out.go:177] * [stopped-upgrade-682800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 23:58:00.905226   11032 notify.go:220] Checking for updates...
	I1226 23:58:01.002297   11032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 23:58:01.257735   11032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 23:58:01.592026   11032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 23:58:01.805376   11032 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 23:58:02.048842   11032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 23:58:02.210995   11032 config.go:182] Loaded profile config "stopped-upgrade-682800": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1226 23:58:02.210995   11032 start_flags.go:694] config upgrade: Driver=hyperv
	I1226 23:58:02.210995   11032 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 23:58:02.211268   11032 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-682800\config.json ...
	I1226 23:58:02.352082   11032 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 23:58:02.408471   11032 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 23:58:09.042852   11032 out.go:177] * Using the hyperv driver based on existing profile
	I1226 23:58:09.056773   11032 start.go:298] selected driver: hyperv
	I1226 23:58:09.056773   11032 start.go:902] validating driver "hyperv" against &{Name:stopped-upgrade-682800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.21.182.140 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 23:58:09.059626   11032 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 23:58:09.121130   11032 cni.go:84] Creating CNI manager for ""
	I1226 23:58:09.121206   11032 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1226 23:58:09.121258   11032 start_flags.go:323] config:
	{Name:stopped-upgrade-682800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.21.182.140 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 23:58:09.121754   11032 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.454847   11032 out.go:177] * Starting control plane node stopped-upgrade-682800 in cluster stopped-upgrade-682800
	I1226 23:58:09.566416   11032 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1226 23:58:09.609690   11032 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1226 23:58:09.616117   11032 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-682800\config.json ...
	I1226 23:58:09.616117   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1226 23:58:09.616117   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I1226 23:58:09.616236   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I1226 23:58:09.616236   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I1226 23:58:09.616373   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I1226 23:58:09.616600   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I1226 23:58:09.616661   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I1226 23:58:09.616661   11032 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I1226 23:58:09.620305   11032 start.go:365] acquiring machines lock for stopped-upgrade-682800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1226 23:58:09.877571   11032 cache.go:107] acquiring lock: {Name:mk7a50c4bf2c20bec1fff9de3ac74780139c1c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.877571   11032 cache.go:107] acquiring lock: {Name:mkbbc88bc55edd0ef8bd1c53673fe74e0129caa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.877571   11032 cache.go:107] acquiring lock: {Name:mkcd99a49ef11cbbf53d95904dadb7eadb7e30f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.878184   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I1226 23:58:09.878230   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I1226 23:58:09.878184   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I1226 23:58:09.878270   11032 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 261.9692ms
	I1226 23:58:09.878270   11032 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I1226 23:58:09.878270   11032 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 261.2706ms
	I1226 23:58:09.878270   11032 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 261.8977ms
	I1226 23:58:09.878270   11032 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I1226 23:58:09.878270   11032 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I1226 23:58:09.888400   11032 cache.go:107] acquiring lock: {Name:mk4e8ee16ba5b475b341c78282e92381b8584a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.888400   11032 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.888400   11032 cache.go:107] acquiring lock: {Name:mka7be082bbc64a256cc388eda31b6c9edba386f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.888400   11032 cache.go:107] acquiring lock: {Name:mkf253ced278c18e0b579f9f5e07f6a2fe7db678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.888400   11032 cache.go:107] acquiring lock: {Name:mk69342e4f48cfcf5669830048d73215a892bfa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 23:58:09.888400   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I1226 23:58:09.888400   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1226 23:58:09.888400   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I1226 23:58:09.888400   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I1226 23:58:09.888400   11032 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 272.2233ms
	I1226 23:58:09.888400   11032 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1226 23:58:09.888400   11032 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 271.5573ms
	I1226 23:58:09.888960   11032 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I1226 23:58:09.888400   11032 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I1226 23:58:09.889286   11032 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 273.1088ms
	I1226 23:58:09.888960   11032 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 272.1703ms
	I1226 23:58:09.889351   11032 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I1226 23:58:09.888400   11032 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 271.7394ms
	I1226 23:58:09.889351   11032 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I1226 23:58:09.889351   11032 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I1226 23:58:09.889351   11032 cache.go:87] Successfully saved all images to host disk.
	I1227 00:01:41.741352   11032 start.go:369] acquired machines lock for "stopped-upgrade-682800" in 3m32.1210065s
	I1227 00:01:41.741604   11032 start.go:96] Skipping create...Using existing machine configuration
	I1227 00:01:41.741692   11032 fix.go:54] fixHost starting: minikube
	I1227 00:01:41.742430   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:01:43.950565   11032 main.go:141] libmachine: [stdout =====>] : Off
	
	I1227 00:01:43.950565   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:01:43.950565   11032 fix.go:102] recreateIfNeeded on stopped-upgrade-682800: state=Stopped err=<nil>
	W1227 00:01:43.950565   11032 fix.go:128] unexpected machine state, will restart: <nil>
	I1227 00:01:43.956095   11032 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-682800" ...
	I1227 00:01:43.958493   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-682800
	I1227 00:01:47.119129   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:01:47.119129   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:01:47.119129   11032 main.go:141] libmachine: Waiting for host to start...
	I1227 00:01:47.119129   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:01:49.630645   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:01:49.630880   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:01:49.630975   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:01:52.315128   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:01:52.315128   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:01:53.318502   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:01:55.552980   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:01:55.553246   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:01:55.553319   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:01:58.276989   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:01:58.276989   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:01:59.289247   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:01.606731   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:01.606781   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:01.606781   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:04.289991   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:02:04.289991   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:05.305521   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:07.577762   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:07.577762   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:07.577762   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:10.196951   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:02:10.197113   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:11.213592   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:13.485069   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:13.485069   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:13.485069   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:16.074864   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:02:16.074925   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:17.090429   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:19.373647   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:19.373647   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:19.373647   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:21.988055   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:02:21.988329   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:23.001443   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:25.315089   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:25.315089   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:25.315089   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:27.973306   11032 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:02:27.973393   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:28.984904   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:31.285069   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:31.285069   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:31.285069   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:33.948714   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:02:33.948750   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:33.952136   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:36.147977   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:36.148079   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:36.148079   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:38.790449   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:02:38.790449   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:38.790620   11032 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-682800\config.json ...
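The "Waiting for host to start..." sequence above is a retry loop: poll the VM state, then the first adapter's first IP address, sleep about a second, and repeat until PowerShell prints a non-empty address. A minimal shell sketch of that loop, where the three simulated poll results are illustrative stand-ins for the `(( Hyper-V\Get-VM <vm> ).networkadapters[0]).ipaddresses[0]` calls:

```shell
# Simulate the wait-for-IP loop: each value stands in for one PowerShell poll.
ip=""
for out in "" "" "172.21.182.140"; do   # first two polls: VM running, no IP yet
  ip=$out
  [ -n "$ip" ] && break                 # stop as soon as an address appears
  # the real loop sleeps roughly a second between attempts
done
echo "got ip: $ip"
```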
	I1227 00:02:38.793861   11032 machine.go:88] provisioning docker machine ...
	I1227 00:02:38.793861   11032 buildroot.go:166] provisioning hostname "stopped-upgrade-682800"
	I1227 00:02:38.793949   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:40.977761   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:40.977990   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:40.977990   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:43.647721   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:02:43.647721   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:43.651717   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:02:43.652716   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:02:43.652716   11032 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-682800 && echo "stopped-upgrade-682800" | sudo tee /etc/hostname
	I1227 00:02:43.790041   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-682800
	
	I1227 00:02:43.790041   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:46.077888   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:46.077888   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:46.077888   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:48.759430   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:02:48.759430   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:48.765480   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:02:48.766248   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:02:48.766277   11032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-682800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-682800/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-682800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 00:02:48.910750   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
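The /etc/hosts command above is idempotent: it only touches the file when the hostname is missing, either rewriting an existing `127.0.1.1` line or appending a fresh one. A sketch of the same logic against a scratch file (the path and initial contents are illustrative):

```shell
hosts=$(mktemp)                      # scratch stand-in for /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
name=stopped-upgrade-682800
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # an existing 127.0.1.1 entry: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # no 127.0.1.1 entry yet: append one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
```

Running it a second time is a no-op, since the first `grep` then finds the hostname.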
	I1227 00:02:48.910865   11032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1227 00:02:48.910865   11032 buildroot.go:174] setting up certificates
	I1227 00:02:48.910865   11032 provision.go:83] configureAuth start
	I1227 00:02:48.911018   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:51.152107   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:51.152392   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:51.152392   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:54.248769   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:02:54.248833   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:54.248833   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:02:56.441574   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:02:56.441638   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:56.441638   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:02:59.034640   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:02:59.034640   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:02:59.034811   11032 provision.go:138] copyHostCerts
	I1227 00:02:59.034892   11032 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1227 00:02:59.034892   11032 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1227 00:02:59.035726   11032 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 00:02:59.036993   11032 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1227 00:02:59.036993   11032 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1227 00:02:59.036993   11032 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 00:02:59.038299   11032 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1227 00:02:59.038299   11032 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1227 00:02:59.039125   11032 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 00:02:59.040088   11032 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-682800 san=[172.21.182.140 172.21.182.140 localhost 127.0.0.1 minikube stopped-upgrade-682800]
	I1227 00:02:59.449842   11032 provision.go:172] copyRemoteCerts
	I1227 00:02:59.461789   11032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 00:02:59.461789   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:01.651212   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:01.651212   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:01.651328   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:04.252994   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:04.253075   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:04.253075   11032 sshutil.go:53] new ssh client: &{IP:172.21.182.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-682800\id_rsa Username:docker}
	I1227 00:03:04.357197   11032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8954096s)
	I1227 00:03:04.357789   11032 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 00:03:04.376888   11032 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 00:03:04.395604   11032 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 00:03:04.413427   11032 provision.go:86] duration metric: configureAuth took 15.5023831s
	I1227 00:03:04.413427   11032 buildroot.go:189] setting minikube options for container-runtime
	I1227 00:03:04.414024   11032 config.go:182] Loaded profile config "stopped-upgrade-682800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1227 00:03:04.414138   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:06.545740   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:06.545740   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:06.545740   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:09.150615   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:09.150747   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:09.157051   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:03:09.157853   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:03:09.157853   11032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 00:03:09.299187   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 00:03:09.299187   11032 buildroot.go:70] root file system type: tmpfs
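The container-runtime probe above is just `df` restricted to the filesystem-type column, so the last output line is the type of `/` (on this buildroot guest, tmpfs). The same command works locally, assuming GNU coreutils `df` (which provides `--output`):

```shell
# Print only the fstype column for /; tail drops the "Type" header line.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```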
	I1227 00:03:09.299187   11032 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 00:03:09.299187   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:11.457832   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:11.457923   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:11.457923   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:14.087104   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:14.087172   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:14.092670   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:03:14.093401   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:03:14.093988   11032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 00:03:14.243863   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 00:03:14.243930   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:16.402224   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:16.402420   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:16.402420   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:19.059715   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:19.059715   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:19.065780   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:03:19.066483   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:03:19.066483   11032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 00:03:20.482391   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1227 00:03:20.482391   11032 machine.go:91] provisioned docker machine in 41.6885422s
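The `diff ... || { mv ...; systemctl ...; }` command above is a compare-then-replace: the new unit is swapped in (and docker reloaded/restarted) only when `diff` exits non-zero, i.e. the installed unit is missing or differs — which is exactly what the "can't stat ... No such file" output in this run shows. A local sketch with scratch paths (the systemctl steps are left as a comment):

```shell
dir=$(mktemp -d)
printf '[Unit]\nDescription=demo unit\n' > "$dir/docker.service.new"
# diff fails when the target is absent or different, triggering the swap;
# on the real host the swap is followed by daemon-reload / enable / restart.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
fi
```

When the installed unit already matches, `diff` succeeds and nothing is restarted, which keeps repeated provisioning cheap.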
	I1227 00:03:20.482391   11032 start.go:300] post-start starting for "stopped-upgrade-682800" (driver="hyperv")
	I1227 00:03:20.482391   11032 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 00:03:20.496487   11032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 00:03:20.496487   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:22.686733   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:22.686733   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:22.686862   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:25.299084   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:25.299084   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:25.299377   11032 sshutil.go:53] new ssh client: &{IP:172.21.182.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-682800\id_rsa Username:docker}
	I1227 00:03:25.403626   11032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9071411s)
	I1227 00:03:25.416759   11032 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 00:03:25.423635   11032 info.go:137] Remote host: Buildroot 2019.02.7
	I1227 00:03:25.423771   11032 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1227 00:03:25.423883   11032 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1227 00:03:25.425216   11032 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1227 00:03:25.439660   11032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 00:03:25.450328   11032 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1227 00:03:25.468466   11032 start.go:303] post-start completed in 4.9860778s
	I1227 00:03:25.468993   11032 fix.go:56] fixHost completed within 1m43.7273332s
	I1227 00:03:25.468993   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:27.684802   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:27.684802   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:27.684802   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:30.342481   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:30.342819   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:30.349226   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:03:30.349902   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:03:30.349902   11032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1227 00:03:30.474597   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703635410.465968323
	
	I1227 00:03:30.474683   11032 fix.go:206] guest clock: 1703635410.465968323
	I1227 00:03:30.474742   11032 fix.go:219] Guest: 2023-12-27 00:03:30.465968323 +0000 UTC Remote: 2023-12-27 00:03:25.4689939 +0000 UTC m=+324.863366201 (delta=4.996974423s)
	I1227 00:03:30.474819   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:32.676727   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:32.676727   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:32.676727   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:35.288551   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:35.288551   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:35.294601   11032 main.go:141] libmachine: Using SSH client type: native
	I1227 00:03:35.295369   11032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.182.140 22 <nil> <nil>}
	I1227 00:03:35.295369   11032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703635410
	I1227 00:03:35.422778   11032 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Dec 27 00:03:30 UTC 2023
	
	I1227 00:03:35.422778   11032 fix.go:226] clock set: Wed Dec 27 00:03:30 UTC 2023
	 (err=<nil>)
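The clock fix above reads the guest's epoch time over SSH (`date +%s.%N`), compares it with the host-side "Remote" timestamp, and resets the guest with `sudo date -s @<epoch>` when they drift (the ~5s delta in this run is the time spent in post-start). A local sketch, with both clocks read on the same machine so the delta is near zero and no reset fires:

```shell
guest=$(date +%s)   # in minikube this value comes from "date +%s.%N" over SSH
host=$(date +%s)
delta=$((host - guest))
# compare the magnitude: strip a possible leading minus sign first
if [ "${delta#-}" -gt 2 ]; then
  echo "would run: sudo date -s @$host"
fi
```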
	I1227 00:03:35.422778   11032 start.go:83] releasing machines lock for "stopped-upgrade-682800", held for 1m53.6813281s
	I1227 00:03:35.422778   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:37.655767   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:37.655871   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:37.656009   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:40.303034   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:40.303259   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:40.308636   11032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 00:03:40.308727   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:40.319374   11032 ssh_runner.go:195] Run: cat /version.json
	I1227 00:03:40.319374   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-682800 ).state
	I1227 00:03:42.595645   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:42.595645   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:42.595772   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:42.641798   11032 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:03:42.641798   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:42.642005   11032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-682800 ).networkadapters[0]).ipaddresses[0]
	I1227 00:03:45.400407   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:45.400407   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:45.400407   11032 sshutil.go:53] new ssh client: &{IP:172.21.182.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-682800\id_rsa Username:docker}
	I1227 00:03:45.462295   11032 main.go:141] libmachine: [stdout =====>] : 172.21.182.140
	
	I1227 00:03:45.462373   11032 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:03:45.462552   11032 sshutil.go:53] new ssh client: &{IP:172.21.182.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-682800\id_rsa Username:docker}
	I1227 00:03:45.500079   11032 ssh_runner.go:235] Completed: cat /version.json: (5.1807075s)
	W1227 00:03:45.500079   11032 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1227 00:03:45.512087   11032 ssh_runner.go:195] Run: systemctl --version
	I1227 00:03:46.243787   11032 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.9351532s)
	I1227 00:03:46.257718   11032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 00:03:46.266276   11032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 00:03:46.280828   11032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1227 00:03:46.303409   11032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1227 00:03:46.312339   11032 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1227 00:03:46.312425   11032 start.go:475] detecting cgroup driver to use...
	I1227 00:03:46.312773   11032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 00:03:46.344032   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1227 00:03:46.371534   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 00:03:46.382331   11032 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 00:03:46.404001   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 00:03:46.427527   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 00:03:46.452680   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 00:03:46.478674   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 00:03:46.501290   11032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 00:03:46.524879   11032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 00:03:46.549882   11032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 00:03:46.574202   11032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 00:03:46.598971   11032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:03:46.734177   11032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 00:03:46.757389   11032 start.go:475] detecting cgroup driver to use...
	I1227 00:03:46.770210   11032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 00:03:46.802117   11032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 00:03:46.834964   11032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 00:03:47.026433   11032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 00:03:47.057394   11032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 00:03:47.075510   11032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 00:03:47.107427   11032 ssh_runner.go:195] Run: which cri-dockerd
	I1227 00:03:47.127384   11032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 00:03:47.136956   11032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1227 00:03:47.165716   11032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 00:03:47.301604   11032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 00:03:47.426338   11032 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1227 00:03:47.426338   11032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1227 00:03:47.455050   11032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:03:47.583281   11032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 00:03:48.673525   11032 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.090244s)
	I1227 00:03:48.686609   11032 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1227 00:03:48.710827   11032 out.go:177] 
	W1227 00:03:48.713190   11032 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Wed 2023-12-27 00:02:26 UTC, end at Wed 2023-12-27 00:03:48 UTC. --
	Dec 27 00:03:19 stopped-upgrade-682800 systemd[1]: Starting Docker Application Container Engine...
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.515958419Z" level=info msg="Starting up"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.518968581Z" level=info msg="libcontainerd: started new containerd process" pid=2476
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519026778Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519039377Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519065476Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519083375Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.565892219Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.566621486Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.567319753Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.567800231Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.567917726Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.570016329Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.570143523Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.571047082Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.571873644Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572287125Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572403819Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572435718Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572445217Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572453217Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577114702Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577236997Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577309193Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577767172Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577869568Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577890367Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577912066Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577980562Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578003461Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578016861Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578115756Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578292448Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578987016Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579106411Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579153208Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579173607Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579185607Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579196506Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579206706Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579218805Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579229705Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579240004Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579250504Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579306701Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579321901Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579333600Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579344100Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579468594Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579616487Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579713983Z" level=info msg="containerd successfully booted in 0.016660s"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595010878Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595150272Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595266766Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595292865Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601089398Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601188593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601213192Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601224392Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.619619944Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807459892Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807572687Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807587986Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807595186Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807602486Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807609585Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807978568Z" level=info msg="Loading containers: start."
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.275674518Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.391744805Z" level=info msg="Loading containers: done."
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.431310597Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.432862830Z" level=info msg="Daemon has completed initialization"
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.472856803Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.472923600Z" level=info msg="API listen on [::]:2376"
	Dec 27 00:03:20 stopped-upgrade-682800 systemd[1]: Started Docker Application Container Engine.
	Dec 27 00:03:47 stopped-upgrade-682800 systemd[1]: Stopping Docker Application Container Engine...
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.585821259Z" level=info msg="Processing signal 'terminated'"
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.586917459Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.587194559Z" level=info msg="Daemon shutdown complete"
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.587342059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.587373659Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: docker.service: Succeeded.
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: Stopped Docker Application Container Engine.
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: Starting Docker Application Container Engine...
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.653591259Z" level=info msg="Starting up"
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657413359Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657523259Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657552859Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657576359Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657825659Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Wed 2023-12-27 00:02:26 UTC, end at Wed 2023-12-27 00:03:48 UTC. --
	Dec 27 00:03:19 stopped-upgrade-682800 systemd[1]: Starting Docker Application Container Engine...
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.515958419Z" level=info msg="Starting up"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.518968581Z" level=info msg="libcontainerd: started new containerd process" pid=2476
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519026778Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519039377Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519065476Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.519083375Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.565892219Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.566621486Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.567319753Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.567800231Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.567917726Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.570016329Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.570143523Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.571047082Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.571873644Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572287125Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572403819Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572435718Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572445217Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.572453217Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577114702Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577236997Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577309193Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577767172Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577869568Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577890367Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577912066Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.577980562Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578003461Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578016861Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578115756Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578292448Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.578987016Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579106411Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579153208Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579173607Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579185607Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579196506Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579206706Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579218805Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579229705Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579240004Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579250504Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579306701Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579321901Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579333600Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579344100Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579468594Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579616487Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.579713983Z" level=info msg="containerd successfully booted in 0.016660s"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595010878Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595150272Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595266766Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.595292865Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601089398Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601188593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601213192Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.601224392Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.619619944Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807459892Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807572687Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807587986Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807595186Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807602486Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807609585Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 27 00:03:19 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:19.807978568Z" level=info msg="Loading containers: start."
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.275674518Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.391744805Z" level=info msg="Loading containers: done."
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.431310597Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.432862830Z" level=info msg="Daemon has completed initialization"
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.472856803Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 00:03:20 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:20.472923600Z" level=info msg="API listen on [::]:2376"
	Dec 27 00:03:20 stopped-upgrade-682800 systemd[1]: Started Docker Application Container Engine.
	Dec 27 00:03:47 stopped-upgrade-682800 systemd[1]: Stopping Docker Application Container Engine...
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.585821259Z" level=info msg="Processing signal 'terminated'"
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.586917459Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.587194559Z" level=info msg="Daemon shutdown complete"
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.587342059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 27 00:03:47 stopped-upgrade-682800 dockerd[2468]: time="2023-12-27T00:03:47.587373659Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: docker.service: Succeeded.
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: Stopped Docker Application Container Engine.
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: Starting Docker Application Container Engine...
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.653591259Z" level=info msg="Starting up"
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657413359Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657523259Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657552859Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657576359Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: time="2023-12-27T00:03:48.657825659Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 27 00:03:48 stopped-upgrade-682800 dockerd[2918]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 27 00:03:48 stopped-upgrade-682800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1227 00:03:48.714150   11032 out.go:239] * 
	* 
	W1227 00:03:48.716062   11032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 00:03:48.718887   11032 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-682800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (674.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (482.26s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-178300 --alsologtostderr -v=1 --driver=hyperv
E1227 00:03:36.125799   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1227 00:03:44.741380   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-178300 --alsologtostderr -v=1 --driver=hyperv: (6m48.8533288s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-178300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node pause-178300 in cluster pause-178300
	* Updating the running hyperv "pause-178300" VM ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-178300" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	W1227 00:02:49.135619    9096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1227 00:02:49.212474    9096 out.go:296] Setting OutFile to fd 1312 ...
	I1227 00:02:49.213002    9096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1227 00:02:49.213002    9096 out.go:309] Setting ErrFile to fd 1864...
	I1227 00:02:49.213002    9096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1227 00:02:49.243124    9096 out.go:303] Setting JSON to false
	I1227 00:02:49.249950    9096 start.go:128] hostinfo: {"hostname":"minikube1","uptime":9768,"bootTime":1703625601,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1227 00:02:49.250165    9096 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1227 00:02:49.253389    9096 out.go:177] * [pause-178300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1227 00:02:49.256718    9096 notify.go:220] Checking for updates...
	I1227 00:02:49.258678    9096 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1227 00:02:49.261427    9096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 00:02:49.267338    9096 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1227 00:02:49.270031    9096 out.go:177]   - MINIKUBE_LOCATION=17857
	I1227 00:02:49.273102    9096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 00:02:49.277275    9096 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:02:49.278575    9096 driver.go:392] Setting default libvirt URI to qemu:///system
	I1227 00:02:54.868999    9096 out.go:177] * Using the hyperv driver based on existing profile
	I1227 00:02:54.872889    9096 start.go:298] selected driver: hyperv
	I1227 00:02:54.872889    9096 start.go:902] validating driver "hyperv" against &{Name:pause-178300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:pause-178300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.179.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver
-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1227 00:02:54.872889    9096 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 00:02:54.925607    9096 cni.go:84] Creating CNI manager for ""
	I1227 00:02:54.925687    9096 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 00:02:54.925774    9096 start_flags.go:323] config:
	{Name:pause-178300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-178300 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.179.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fa
lse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1227 00:02:54.926363    9096 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 00:02:54.932124    9096 out.go:177] * Starting control plane node pause-178300 in cluster pause-178300
	I1227 00:02:54.934489    9096 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1227 00:02:54.934489    9096 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1227 00:02:54.934489    9096 cache.go:56] Caching tarball of preloaded images
	I1227 00:02:54.935234    9096 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 00:02:54.935234    9096 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1227 00:02:54.935855    9096 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\config.json ...
	I1227 00:02:54.938655    9096 start.go:365] acquiring machines lock for pause-178300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 00:06:49.120377    9096 start.go:369] acquired machines lock for "pause-178300" in 3m54.1817975s
	I1227 00:06:49.120377    9096 start.go:96] Skipping create...Using existing machine configuration
	I1227 00:06:49.120377    9096 fix.go:54] fixHost starting: 
	I1227 00:06:49.121971    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:06:51.498541    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:06:51.498827    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:06:51.498827    9096 fix.go:102] recreateIfNeeded on pause-178300: state=Running err=<nil>
	W1227 00:06:51.498936    9096 fix.go:128] unexpected machine state, will restart: <nil>
	I1227 00:06:51.501874    9096 out.go:177] * Updating the running hyperv "pause-178300" VM ...
	I1227 00:06:51.505475    9096 machine.go:88] provisioning docker machine ...
	I1227 00:06:51.505551    9096 buildroot.go:166] provisioning hostname "pause-178300"
	I1227 00:06:51.505599    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:06:54.188241    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:06:54.188373    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:06:54.188418    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:06:57.329051    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:06:57.329131    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:06:57.335371    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:06:57.336027    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:06:57.336027    9096 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-178300 && echo "pause-178300" | sudo tee /etc/hostname
	I1227 00:06:57.534888    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-178300
	
	I1227 00:06:57.535006    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:00.076797    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:00.076877    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:00.076877    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:02.672997    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:02.672997    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:02.677975    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:07:02.677975    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:07:02.677975    9096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-178300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-178300/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-178300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 00:07:02.819856    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1227 00:07:02.819856    9096 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1227 00:07:02.819856    9096 buildroot.go:174] setting up certificates
	I1227 00:07:02.819856    9096 provision.go:83] configureAuth start
	I1227 00:07:02.819856    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:05.094458    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:05.094562    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:05.094562    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:07.820983    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:07.820983    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:07.821067    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:10.043963    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:10.044314    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:10.044314    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:12.777007    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:12.777007    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:12.777165    9096 provision.go:138] copyHostCerts
	I1227 00:07:12.777229    9096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1227 00:07:12.777229    9096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1227 00:07:12.778152    9096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 00:07:12.779532    9096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1227 00:07:12.779532    9096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1227 00:07:12.779532    9096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 00:07:12.781642    9096 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1227 00:07:12.781642    9096 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1227 00:07:12.781642    9096 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 00:07:12.783304    9096 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-178300 san=[172.21.179.115 172.21.179.115 localhost 127.0.0.1 minikube pause-178300]
	I1227 00:07:12.940387    9096 provision.go:172] copyRemoteCerts
	I1227 00:07:12.952788    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 00:07:12.952972    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:15.283876    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:15.284009    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:15.284009    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:17.974568    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:17.974568    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:17.974850    9096 sshutil.go:53] new ssh client: &{IP:172.21.179.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-178300\id_rsa Username:docker}
	I1227 00:07:18.103250    9096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1501625s)
	I1227 00:07:18.103976    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 00:07:18.152387    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1227 00:07:18.209830    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 00:07:18.255485    9096 provision.go:86] duration metric: configureAuth took 15.4355628s
	I1227 00:07:18.255551    9096 buildroot.go:189] setting minikube options for container-runtime
	I1227 00:07:18.255709    9096 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:07:18.256238    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:20.486403    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:20.486588    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:20.486588    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:23.138306    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:23.138399    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:23.144455    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:07:23.145094    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:07:23.145246    9096 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 00:07:23.291077    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 00:07:23.291077    9096 buildroot.go:70] root file system type: tmpfs
	I1227 00:07:23.291077    9096 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 00:07:23.291077    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:25.474958    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:25.475018    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:25.475096    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:28.253187    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:28.253379    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:28.259849    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:07:28.260648    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:07:28.260728    9096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 00:07:28.464619    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 00:07:28.464796    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:30.741328    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:30.741452    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:30.741452    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:33.424399    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:33.424594    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:33.430222    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:07:33.431150    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:07:33.431150    9096 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 00:07:33.584278    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1227 00:07:33.584278    9096 machine.go:91] provisioned docker machine in 42.0788256s
	I1227 00:07:33.584278    9096 start.go:300] post-start starting for "pause-178300" (driver="hyperv")
	I1227 00:07:33.584278    9096 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 00:07:33.598957    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 00:07:33.598957    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:35.839245    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:35.839280    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:35.839360    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:38.557701    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:38.557701    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:38.557993    9096 sshutil.go:53] new ssh client: &{IP:172.21.179.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-178300\id_rsa Username:docker}
	I1227 00:07:38.709264    9096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1103083s)
	I1227 00:07:38.725768    9096 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 00:07:38.732889    9096 info.go:137] Remote host: Buildroot 2021.02.12
	I1227 00:07:38.733519    9096 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1227 00:07:38.733519    9096 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1227 00:07:38.736008    9096 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1227 00:07:38.756968    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 00:07:38.776760    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1227 00:07:38.826655    9096 start.go:303] post-start completed in 5.2422791s
	I1227 00:07:38.826655    9096 fix.go:56] fixHost completed within 49.7063053s
	I1227 00:07:38.826655    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:41.068606    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:41.068778    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:41.068843    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:43.735075    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:43.735310    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:43.741819    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:07:43.742709    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:07:43.742709    9096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1227 00:07:43.882663    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703635663.880298230
	
	I1227 00:07:43.882663    9096 fix.go:206] guest clock: 1703635663.880298230
	I1227 00:07:43.882663    9096 fix.go:219] Guest: 2023-12-27 00:07:43.88029823 +0000 UTC Remote: 2023-12-27 00:07:38.8266556 +0000 UTC m=+289.793755201 (delta=5.05364263s)
	I1227 00:07:43.882663    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:46.165033    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:46.165098    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:46.165188    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:48.921656    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:48.921656    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:48.928583    9096 main.go:141] libmachine: Using SSH client type: native
	I1227 00:07:48.929801    9096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.179.115 22 <nil> <nil>}
	I1227 00:07:48.929801    9096 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703635663
	I1227 00:07:49.082667    9096 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Dec 27 00:07:43 UTC 2023
	
	I1227 00:07:49.082800    9096 fix.go:226] clock set: Wed Dec 27 00:07:43 UTC 2023
	 (err=<nil>)
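The clock-sync step above compares the guest's `date +%s.%N` against the host-side timestamp and resets the guest clock with `sudo date -s @…` when they drift. Reduced to its arithmetic (values taken from the log lines above; the 1s threshold is an assumption for illustration, not minikube's actual cutoff):

```python
# Sketch of the guest-clock drift check performed by fix.go above.
guest = 1703635663.880298230   # guest clock from `date +%s.%N` over SSH
remote = 1703635658.8266556    # host-side timestamp when post-start completed
delta = guest - remote
needs_sync = abs(delta) > 1.0  # resync when drift exceeds ~1s (assumed threshold)
print(round(delta, 2), needs_sync)  # 5.05 True
```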
	I1227 00:07:49.082800    9096 start.go:83] releasing machines lock for "pause-178300", held for 59.9624534s
	I1227 00:07:49.083049    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:51.512815    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:51.512815    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:51.512904    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:54.732355    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:07:54.732355    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:54.736597    9096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 00:07:54.736664    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:54.764826    9096 ssh_runner.go:195] Run: cat /version.json
	I1227 00:07:54.764826    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-178300 ).state
	I1227 00:07:57.102382    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:57.102701    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:57.102701    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:07:57.149805    9096 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:07:57.149805    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:07:57.149805    9096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-178300 ).networkadapters[0]).ipaddresses[0]
	I1227 00:08:00.383648    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:08:00.383899    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:00.384216    9096 sshutil.go:53] new ssh client: &{IP:172.21.179.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-178300\id_rsa Username:docker}
	I1227 00:08:00.410319    9096 main.go:141] libmachine: [stdout =====>] : 172.21.179.115
	
	I1227 00:08:00.410319    9096 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:00.410319    9096 sshutil.go:53] new ssh client: &{IP:172.21.179.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-178300\id_rsa Username:docker}
	I1227 00:08:10.538387    9096 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (15.8017954s)
	W1227 00:08:10.538501    9096 start.go:843] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	I1227 00:08:10.538593    9096 ssh_runner.go:235] Completed: cat /version.json: (15.7736794s)
	W1227 00:08:10.538742    9096 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W1227 00:08:10.538742    9096 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
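The curl failure above (exit status 28, "Resolving timed out") is a DNS resolution timeout inside the VM, not an HTTP error. A hypothetical pre-flight check for the same condition, not part of minikube itself:

```python
# Sketch: test whether a hostname resolves at all, mirroring the failure
# mode behind "curl: (28) Resolving timed out" seen above.
import socket

def can_resolve(host: str) -> bool:
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))
```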
	I1227 00:08:10.565591    9096 ssh_runner.go:195] Run: systemctl --version
	I1227 00:08:10.591541    9096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 00:08:10.602906    9096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 00:08:10.616514    9096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 00:08:10.638154    9096 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 00:08:10.638154    9096 start.go:475] detecting cgroup driver to use...
	I1227 00:08:10.638720    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 00:08:10.702967    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1227 00:08:10.744025    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 00:08:10.762027    9096 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 00:08:10.777210    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 00:08:10.810943    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 00:08:10.844662    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 00:08:10.876917    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 00:08:10.910542    9096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 00:08:10.947467    9096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 00:08:10.981606    9096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 00:08:11.015541    9096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 00:08:11.051353    9096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:08:11.298559    9096 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 00:08:11.334940    9096 start.go:475] detecting cgroup driver to use...
	I1227 00:08:11.358601    9096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 00:08:11.394582    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 00:08:11.430155    9096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 00:08:11.482143    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 00:08:11.526403    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 00:08:11.550977    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 00:08:11.599493    9096 ssh_runner.go:195] Run: which cri-dockerd
	I1227 00:08:11.619189    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 00:08:11.636930    9096 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1227 00:08:11.688178    9096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 00:08:11.974055    9096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 00:08:12.210533    9096 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1227 00:08:12.210533    9096 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1227 00:08:12.264639    9096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:08:12.537797    9096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 00:08:24.540286    9096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.0024929s)
	I1227 00:08:24.553179    9096 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 00:08:24.773287    9096 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 00:08:24.963976    9096 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 00:08:25.157982    9096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:08:25.363683    9096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 00:08:25.423121    9096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:08:25.646103    9096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1227 00:08:25.790523    9096 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 00:08:25.803017    9096 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 00:08:25.813047    9096 start.go:543] Will wait 60s for crictl version
	I1227 00:08:25.826011    9096 ssh_runner.go:195] Run: which crictl
	I1227 00:08:25.850491    9096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 00:08:25.929781    9096 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1227 00:08:25.940361    9096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 00:08:25.997130    9096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 00:08:26.039951    9096 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1227 00:08:26.039951    9096 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1227 00:08:26.043990    9096 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1227 00:08:26.043990    9096 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1227 00:08:26.043990    9096 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1227 00:08:26.044951    9096 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1227 00:08:26.047956    9096 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1227 00:08:26.047956    9096 ip.go:210] interface addr: 172.21.176.1/20
	I1227 00:08:26.061948    9096 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1227 00:08:26.072251    9096 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1227 00:08:26.082323    9096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 00:08:26.112455    9096 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 00:08:26.112455    9096 docker.go:601] Images already preloaded, skipping extraction
	I1227 00:08:26.120946    9096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 00:08:26.149287    9096 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 00:08:26.149388    9096 cache_images.go:84] Images are preloaded, skipping loading
	I1227 00:08:26.159378    9096 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 00:08:26.200350    9096 cni.go:84] Creating CNI manager for ""
	I1227 00:08:26.200637    9096 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 00:08:26.200637    9096 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1227 00:08:26.200637    9096 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.21.179.115 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-178300 NodeName:pause-178300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.21.179.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.21.179.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 00:08:26.200980    9096 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.21.179.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-178300"
	  kubeletExtraArgs:
	    node-ip: 172.21.179.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.21.179.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
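The kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch, using only the standard library, that splits such a stream on document separators and extracts each `kind:`:

```python
# Sketch: enumerate the `kind` of each document in a kubeadm-style
# multi-document YAML stream without a YAML parser. The config literal
# below is an abbreviated stand-in for the full config above.
config = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""
kinds = [line.split(":", 1)[1].strip()
         for doc in config.split("\n---\n")
         for line in doc.splitlines() if line.startswith("kind:")]
print(kinds)
```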
	I1227 00:08:26.201179    9096 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-178300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.21.179.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-178300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1227 00:08:26.215207    9096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1227 00:08:26.232852    9096 binaries.go:44] Found k8s binaries, skipping transfer
	I1227 00:08:26.246991    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 00:08:26.261961    9096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1227 00:08:26.291572    9096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 00:08:26.324211    9096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I1227 00:08:26.373218    9096 ssh_runner.go:195] Run: grep 172.21.179.115	control-plane.minikube.internal$ /etc/hosts
	I1227 00:08:26.381011    9096 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300 for IP: 172.21.179.115
	I1227 00:08:26.381161    9096 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:08:26.381591    9096 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1227 00:08:26.382122    9096 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1227 00:08:26.383040    9096 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\client.key
	I1227 00:08:26.383121    9096 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\apiserver.key.60c18e01
	I1227 00:08:26.383648    9096 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\proxy-client.key
	I1227 00:08:26.384971    9096 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem (1338 bytes)
	W1227 00:08:26.385058    9096 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728_empty.pem, impossibly tiny 0 bytes
	I1227 00:08:26.385058    9096 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1227 00:08:26.385585    9096 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1227 00:08:26.385885    9096 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1227 00:08:26.385885    9096 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1227 00:08:26.386696    9096 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem (1708 bytes)
	I1227 00:08:26.388201    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1227 00:08:26.460884    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 00:08:26.511794    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 00:08:26.555755    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-178300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 00:08:26.603154    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 00:08:26.654170    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 00:08:26.713942    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 00:08:26.779642    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 00:08:26.846903    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /usr/share/ca-certificates/107282.pem (1708 bytes)
	I1227 00:08:26.885820    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 00:08:26.927673    9096 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10728.pem --> /usr/share/ca-certificates/10728.pem (1338 bytes)
	I1227 00:08:26.974750    9096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1227 00:08:27.018728    9096 ssh_runner.go:195] Run: openssl version
	I1227 00:08:27.040942    9096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10728.pem && ln -fs /usr/share/ca-certificates/10728.pem /etc/ssl/certs/10728.pem"
	I1227 00:08:27.077889    9096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10728.pem
	I1227 00:08:27.085529    9096 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 22:04 /usr/share/ca-certificates/10728.pem
	I1227 00:08:27.098648    9096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10728.pem
	I1227 00:08:27.119268    9096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10728.pem /etc/ssl/certs/51391683.0"
	I1227 00:08:27.149714    9096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107282.pem && ln -fs /usr/share/ca-certificates/107282.pem /etc/ssl/certs/107282.pem"
	I1227 00:08:27.184420    9096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107282.pem
	I1227 00:08:27.192449    9096 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 22:04 /usr/share/ca-certificates/107282.pem
	I1227 00:08:27.208309    9096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107282.pem
	I1227 00:08:27.229399    9096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107282.pem /etc/ssl/certs/3ec20f2e.0"
	I1227 00:08:27.258400    9096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1227 00:08:27.292185    9096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 00:08:27.299636    9096 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1227 00:08:27.313151    9096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 00:08:27.337178    9096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
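The `openssl x509 -hash` / `ln -fs` sequence above is the standard c_rehash convention: OpenSSL looks up CA certificates in `/etc/ssl/certs` through a symlink named `<subject-hash>.0`. A minimal sketch of the same steps against a throwaway cert (the `/tmp/certs` path and `demoCA` CN are ours, not from the log):

```shell
# Generate a throwaway self-signed cert (hypothetical path and CN).
mkdir -p /tmp/certs
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout /tmp/certs/demo.key -out /tmp/certs/demo.pem 2>/dev/null

# Subject-name hash: this value names the <hash>.0 symlink in /etc/ssl/certs.
h=$(openssl x509 -hash -noout -in /tmp/certs/demo.pem)

# Same shape as the log's: test -L <hash>.0 || ln -fs <cert> <hash>.0
ln -fs /tmp/certs/demo.pem "/tmp/certs/$h.0"
ls -l "/tmp/certs/$h.0"
```

Tools that verify TLS chains (curl, wget, the Go TLS stack) resolve issuers through exactly these hash links, which is why minikube installs one per CA it copies in.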
	I1227 00:08:27.367393    9096 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1227 00:08:27.386979    9096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 00:08:27.413029    9096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 00:08:27.435726    9096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 00:08:27.457208    9096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 00:08:27.480319    9096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 00:08:27.505054    9096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
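Each of the `-checkend 86400` runs above asks whether a cert will still be valid 86400 seconds (24 h) from now: exit 0 means yes, exit 1 means it will have expired, which is how minikube decides it can reuse the existing certificates instead of regenerating them. A sketch with a throwaway cert (the `/tmp` paths are hypothetical):

```shell
# Throwaway cert valid for 2 days (hypothetical /tmp path).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=checkend-demo" \
  -keyout /tmp/ce.key -out /tmp/ce.crt 2>/dev/null

# Still valid 24h from now -> exit status 0.
openssl x509 -noout -in /tmp/ce.crt -checkend 86400 && echo "ok for 24h"

# Not valid 3 days from now -> exit status 1.
openssl x509 -noout -in /tmp/ce.crt -checkend $((3 * 86400)) || echo "expires within 3 days"
```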
	I1227 00:08:27.514731    9096 kubeadm.go:404] StartCluster: {Name:pause-178300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:pause-178300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.21.179.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvi
dia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1227 00:08:27.525021    9096 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 00:08:27.566233    9096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 00:08:27.585034    9096 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1227 00:08:27.585034    9096 kubeadm.go:636] restartCluster start
	I1227 00:08:27.600360    9096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 00:08:27.618113    9096 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:27.619113    9096 kubeconfig.go:92] found "pause-178300" server: "https://172.21.179.115:8443"
	I1227 00:08:27.621131    9096 kapi.go:59] client config for pause-178300: &rest.Config{Host:"https://172.21.179.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 00:08:27.636111    9096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 00:08:27.654066    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:27.666713    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:27.687492    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:28.165773    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:28.180309    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:28.202944    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:28.655858    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:28.671039    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:28.692800    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:29.162693    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:29.176221    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:29.199234    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:29.670083    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:29.686281    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:29.707918    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:30.162865    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:30.176629    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:30.197307    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:30.668315    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:30.682338    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:30.707137    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:31.161906    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:31.177190    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:31.211474    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:31.669008    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:31.688885    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:31.748889    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:32.157267    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:32.172439    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 00:08:32.205256    9096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:32.664383    9096 api_server.go:166] Checking apiserver status ...
	I1227 00:08:32.686384    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:08:32.768785    9096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8195/cgroup
	I1227 00:08:32.830533    9096 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod36d5516a2e991bff3582db1fd4757a77/3d0e6d0a21feef2edf22d489f01dd2b68c1603f7a7b5384089b1b5bb2c326d89"
	I1227 00:08:32.853444    9096 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod36d5516a2e991bff3582db1fd4757a77/3d0e6d0a21feef2edf22d489f01dd2b68c1603f7a7b5384089b1b5bb2c326d89/freezer.state
	I1227 00:08:32.926138    9096 api_server.go:204] freezer state: "THAWED"
	I1227 00:08:32.926342    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:37.588297    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:08:37.588297    9096 retry.go:31] will retry after 219.588733ms: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
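The `retry.go:31] will retry after …` lines come from minikube's retry helper, which re-runs the healthz probe with a growing delay until the API server answers. A self-contained shell sketch of that pattern (`retry_backoff` and `flaky_probe` are our names; the real probe is the HTTPS GET to `/healthz` shown in the log):

```shell
# Run "$@" up to $1 times, multiplying the starting delay $2 by 1.5 per failure.
retry_backoff() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep "$delay"
    delay=$(awk -v d="$delay" 'BEGIN { printf "%.3f", d * 1.5 }')
    i=$((i + 1))
  done
  return 1
}

# Stand-in for the healthz probe: fails twice, then succeeds.
rm -f /tmp/retry_count
flaky_probe() {
  n=$(( $(cat /tmp/retry_count 2>/dev/null || echo 0) + 1 ))
  echo "$n" > /tmp/retry_count
  [ "$n" -ge 3 ]
}

retry_backoff 5 0.05 flaky_probe && echo "healthy after $(cat /tmp/retry_count) attempts"
```

In the real log the probe first gets 403 (anonymous user, RBAC bootstrap not finished) and then 500s that enumerate each failing poststarthook, clearing one by one as the control plane settles.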
	I1227 00:08:37.814645    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:38.281852    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:38.281852    9096 retry.go:31] will retry after 333.963775ms: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:38.624738    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:38.640318    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:38.640318    9096 retry.go:31] will retry after 457.674448ms: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:39.099590    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:39.110421    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:39.110529    9096 retry.go:31] will retry after 573.761371ms: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:39.686817    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:39.696628    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:39.696878    9096 retry.go:31] will retry after 633.76318ms: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:40.338626    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:40.347630    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:40.347630    9096 retry.go:31] will retry after 804.214839ms: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:41.159816    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:41.170760    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:41.171373    9096 retry.go:31] will retry after 927.676764ms: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:42.099181    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:42.113571    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:42.113571    9096 retry.go:31] will retry after 1.044209493s: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:43.159915    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:43.180595    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:43.180595    9096 retry.go:31] will retry after 1.155621248s: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:44.337334    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:44.354667    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:44.354667    9096 retry.go:31] will retry after 2.229777627s: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:46.594247    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:46.603344    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:46.603592    9096 kubeadm.go:611] needs reconfigure: apiserver error: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:08:46.603592    9096 kubeadm.go:1135] stopping kube-system containers ...
	I1227 00:08:46.617795    9096 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 00:08:46.666894    9096 docker.go:469] Stopping containers: [a9088dec4fb6 8b6ae4916189 8171cfbf5ab2 0a5a40aa5e6c d66a1426a5db 3d0e6d0a21fe 60e1fde3ed4d 586dd0e9555c cd0ae2e3e41a 95cda17fa533 154bc9ebb206 a25058c31a5d 3b1e02e78a7c 605244dc8479 12cef0cb54b4 76b1e5be96a2 bfe92142e6fb 19efdc59ee92 1bb2ae6d13ea e262913fa9aa fb06efeea0a1 cb482d23561f]
	I1227 00:08:46.676903    9096 ssh_runner.go:195] Run: docker stop a9088dec4fb6 8b6ae4916189 8171cfbf5ab2 0a5a40aa5e6c d66a1426a5db 3d0e6d0a21fe 60e1fde3ed4d 586dd0e9555c cd0ae2e3e41a 95cda17fa533 154bc9ebb206 a25058c31a5d 3b1e02e78a7c 605244dc8479 12cef0cb54b4 76b1e5be96a2 bfe92142e6fb 19efdc59ee92 1bb2ae6d13ea e262913fa9aa fb06efeea0a1 cb482d23561f
	I1227 00:08:52.647985    9096 ssh_runner.go:235] Completed: docker stop a9088dec4fb6 8b6ae4916189 8171cfbf5ab2 0a5a40aa5e6c d66a1426a5db 3d0e6d0a21fe 60e1fde3ed4d 586dd0e9555c cd0ae2e3e41a 95cda17fa533 154bc9ebb206 a25058c31a5d 3b1e02e78a7c 605244dc8479 12cef0cb54b4 76b1e5be96a2 bfe92142e6fb 19efdc59ee92 1bb2ae6d13ea e262913fa9aa fb06efeea0a1 cb482d23561f: (5.9710838s)
	I1227 00:08:52.673417    9096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1227 00:08:52.793340    9096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 00:08:52.821741    9096 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Dec 27 00:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Dec 27 00:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Dec 27 00:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec 27 00:02 /etc/kubernetes/scheduler.conf
	
	I1227 00:08:52.842671    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 00:08:52.890998    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 00:08:52.948201    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 00:08:52.968871    9096 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:52.999248    9096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 00:08:53.061247    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 00:08:53.078306    9096 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:53.092363    9096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 00:08:53.124934    9096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 00:08:53.140945    9096 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1227 00:08:53.141040    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:53.265706    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.315704    9096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0499976s)
	I1227 00:08:54.315704    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.656996    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.782620    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.896415    9096 api_server.go:52] waiting for apiserver process to appear ...
	I1227 00:08:54.910987    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:08:54.935090    9096 api_server.go:72] duration metric: took 38.6749ms to wait for apiserver process to appear ...
	I1227 00:08:54.935155    9096 api_server.go:88] waiting for apiserver healthz status ...
	I1227 00:08:54.935232    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:59.942971    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:08:59.942971    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:04.951424    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:09:04.951493    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:09.962563    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:09:09.962645    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:12.806412    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": read tcp 172.21.176.1:62991->172.21.179.115:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I1227 00:09:12.806412    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:14.831817    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": dial tcp 172.21.179.115:8443: connectex: No connection could be made because the target machine actively refused it.
	I1227 00:09:14.831817    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.558135    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 00:09:18.558135    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:09:18.558135    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.601624    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 00:09:18.602050    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:09:18.940244    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.951278    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:18.951508    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:19.450382    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:19.459806    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:19.460083    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:19.940696    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:19.964611    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:19.964611    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:20.448905    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:20.456907    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 200:
	ok
	I1227 00:09:20.472245    9096 api_server.go:141] control plane version: v1.28.4
	I1227 00:09:20.472351    9096 api_server.go:131] duration metric: took 25.5372053s to wait for apiserver health ...
	I1227 00:09:20.472405    9096 cni.go:84] Creating CNI manager for ""
	I1227 00:09:20.472405    9096 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 00:09:20.475383    9096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1227 00:09:20.490174    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1227 00:09:20.507349    9096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1227 00:09:20.534579    9096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 00:09:20.548022    9096 system_pods.go:59] 6 kube-system pods found
	I1227 00:09:20.548022    9096 system_pods.go:61] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 00:09:20.548022    9096 system_pods.go:61] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 00:09:20.548022    9096 system_pods.go:74] duration metric: took 13.4432ms to wait for pod list to return data ...
	I1227 00:09:20.548022    9096 node_conditions.go:102] verifying NodePressure condition ...
	I1227 00:09:20.554098    9096 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1227 00:09:20.554098    9096 node_conditions.go:123] node cpu capacity is 2
	I1227 00:09:20.554098    9096 node_conditions.go:105] duration metric: took 6.0764ms to run NodePressure ...
	I1227 00:09:20.554098    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:09:20.934651    9096 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1227 00:09:20.946519    9096 kubeadm.go:787] kubelet initialised
	I1227 00:09:20.946519    9096 kubeadm.go:788] duration metric: took 11.8026ms waiting for restarted kubelet to initialise ...
	I1227 00:09:20.946694    9096 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:20.955786    9096 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:21.473959    9096 pod_ready.go:92] pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:21.473959    9096 pod_ready.go:81] duration metric: took 518.1186ms waiting for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:21.473959    9096 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:23.496968    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:25.991427    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:28.000016    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:30.498627    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:32.993816    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:34.025638    9096 pod_ready.go:92] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.025638    9096 pod_ready.go:81] duration metric: took 12.5516836s waiting for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.025638    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.056983    9096 pod_ready.go:92] pod "kube-apiserver-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.056983    9096 pod_ready.go:81] duration metric: took 31.3452ms waiting for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.056983    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.075800    9096 pod_ready.go:92] pod "kube-controller-manager-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.075800    9096 pod_ready.go:81] duration metric: took 18.8162ms waiting for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.075800    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.100634    9096 pod_ready.go:92] pod "kube-proxy-7qklg" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.100634    9096 pod_ready.go:81] duration metric: took 24.8343ms waiting for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.100634    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.108368    9096 pod_ready.go:92] pod "kube-scheduler-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.108368    9096 pod_ready.go:81] duration metric: took 7.734ms waiting for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.108368    9096 pod_ready.go:38] duration metric: took 13.1616782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:34.108368    9096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 00:09:34.129956    9096 ops.go:34] apiserver oom_adj: -16
	I1227 00:09:34.130036    9096 kubeadm.go:640] restartCluster took 1m6.5450256s
	I1227 00:09:34.130081    9096 kubeadm.go:406] StartCluster complete in 1m6.615372s
	I1227 00:09:34.130146    9096 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:09:34.130346    9096 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1227 00:09:34.131613    9096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:09:34.132872    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 00:09:34.132872    9096 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1227 00:09:34.136858    9096 out.go:177] * Enabled addons: 
	I1227 00:09:34.133793    9096 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:09:34.139248    9096 addons.go:508] enable addons completed in 6.3757ms: enabled=[]
	I1227 00:09:34.148140    9096 kapi.go:59] client config for pause-178300: &rest.Config{Host:"https://172.21.179.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 00:09:34.153743    9096 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-178300" context rescaled to 1 replicas
	I1227 00:09:34.153871    9096 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.179.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 00:09:34.156801    9096 out.go:177] * Verifying Kubernetes components...
	I1227 00:09:34.171210    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 00:09:34.278688    9096 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1227 00:09:34.278744    9096 node_ready.go:35] waiting up to 6m0s for node "pause-178300" to be "Ready" ...
	I1227 00:09:34.283361    9096 node_ready.go:49] node "pause-178300" has status "Ready":"True"
	I1227 00:09:34.283361    9096 node_ready.go:38] duration metric: took 4.6162ms waiting for node "pause-178300" to be "Ready" ...
	I1227 00:09:34.283361    9096 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:34.405767    9096 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.795719    9096 pod_ready.go:92] pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.795719    9096 pod_ready.go:81] duration metric: took 389.9522ms waiting for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.795719    9096 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.194062    9096 pod_ready.go:92] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:35.194126    9096 pod_ready.go:81] duration metric: took 398.4075ms waiting for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.194126    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.590396    9096 pod_ready.go:92] pod "kube-apiserver-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:35.590396    9096 pod_ready.go:81] duration metric: took 396.1924ms waiting for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.590498    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.001547    9096 pod_ready.go:92] pod "kube-controller-manager-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.001547    9096 pod_ready.go:81] duration metric: took 411.0488ms waiting for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.001547    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.396452    9096 pod_ready.go:92] pod "kube-proxy-7qklg" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.396452    9096 pod_ready.go:81] duration metric: took 394.9057ms waiting for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.396452    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.794869    9096 pod_ready.go:92] pod "kube-scheduler-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.794869    9096 pod_ready.go:81] duration metric: took 398.4165ms waiting for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.794940    9096 pod_ready.go:38] duration metric: took 2.5115805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:36.794940    9096 api_server.go:52] waiting for apiserver process to appear ...
	I1227 00:09:36.807933    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:09:36.829983    9096 api_server.go:72] duration metric: took 2.6761123s to wait for apiserver process to appear ...
	I1227 00:09:36.829983    9096 api_server.go:88] waiting for apiserver healthz status ...
	I1227 00:09:36.829983    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:36.840216    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 200:
	ok
	I1227 00:09:36.841949    9096 api_server.go:141] control plane version: v1.28.4
	I1227 00:09:36.842799    9096 api_server.go:131] duration metric: took 12.8162ms to wait for apiserver health ...
	I1227 00:09:36.842799    9096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 00:09:37.005545    9096 system_pods.go:59] 6 kube-system pods found
	I1227 00:09:37.005658    9096 system_pods.go:61] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running
	I1227 00:09:37.005658    9096 system_pods.go:74] duration metric: took 162.8589ms to wait for pod list to return data ...
	I1227 00:09:37.005658    9096 default_sa.go:34] waiting for default service account to be created ...
	I1227 00:09:37.191345    9096 default_sa.go:45] found service account: "default"
	I1227 00:09:37.191345    9096 default_sa.go:55] duration metric: took 185.5493ms for default service account to be created ...
	I1227 00:09:37.191473    9096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 00:09:37.404717    9096 system_pods.go:86] 6 kube-system pods found
	I1227 00:09:37.404717    9096 system_pods.go:89] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running
	I1227 00:09:37.404717    9096 system_pods.go:89] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running
	I1227 00:09:37.404796    9096 system_pods.go:126] duration metric: took 213.3231ms to wait for k8s-apps to be running ...
	I1227 00:09:37.404856    9096 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 00:09:37.417809    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 00:09:37.443598    9096 system_svc.go:56] duration metric: took 38.3087ms WaitForService to wait for kubelet.
	I1227 00:09:37.443598    9096 kubeadm.go:581] duration metric: took 3.2897272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1227 00:09:37.443598    9096 node_conditions.go:102] verifying NodePressure condition ...
	I1227 00:09:37.604240    9096 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1227 00:09:37.604356    9096 node_conditions.go:123] node cpu capacity is 2
	I1227 00:09:37.604356    9096 node_conditions.go:105] duration metric: took 160.7584ms to run NodePressure ...
	I1227 00:09:37.604356    9096 start.go:228] waiting for startup goroutines ...
	I1227 00:09:37.604356    9096 start.go:233] waiting for cluster config update ...
	I1227 00:09:37.604356    9096 start.go:242] writing updated cluster config ...
	I1227 00:09:37.619816    9096 ssh_runner.go:195] Run: rm -f paused
	I1227 00:09:37.784765    9096 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1227 00:09:37.789081    9096 out.go:177] * Done! kubectl is now configured to use "pause-178300" cluster and "default" namespace by default

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-178300 -n pause-178300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-178300 -n pause-178300: (12.4130619s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-178300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-178300 logs -n 25: (8.7274944s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-923100             | running-upgrade-923100    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:50 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-164200              | force-systemd-env-164200  | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:51 UTC | 26 Dec 23 23:51 UTC |
	|         | ssh docker info --format              |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-164200           | force-systemd-env-164200  | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:51 UTC | 26 Dec 23 23:52 UTC |
	| delete  | -p cert-expiration-721200             | cert-expiration-721200    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:52 UTC | 26 Dec 23 23:53 UTC |
	| start   | -p cert-options-724600                | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:53 UTC | 26 Dec 23 23:59 UTC |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:55 UTC | 26 Dec 23 23:55 UTC |
	| start   | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:55 UTC | 27 Dec 23 00:00 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-923100             | running-upgrade-923100    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:55 UTC | 26 Dec 23 23:57 UTC |
	| start   | -p pause-178300 --memory=2048         | pause-178300              | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:57 UTC | 27 Dec 23 00:02 UTC |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv            |                           |                   |         |                     |                     |
	| start   | -p stopped-upgrade-682800             | stopped-upgrade-682800    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:58 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | cert-options-724600 ssh               | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:59 UTC | 26 Dec 23 23:59 UTC |
	|         | openssl x509 -text -noout -in         |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-724600 -- sudo        | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:59 UTC | 26 Dec 23 23:59 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |                   |         |                     |                     |
	| delete  | -p cert-options-724600                | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:59 UTC | 27 Dec 23 00:00 UTC |
	| start   | -p docker-flags-107900                | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:00 UTC | 27 Dec 23 00:06 UTC |
	|         | --cache-images=false                  |                           |                   |         |                     |                     |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=false                          |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                    |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:00 UTC | 27 Dec 23 00:08 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p pause-178300                       | pause-178300              | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:02 UTC | 27 Dec 23 00:09 UTC |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-682800             | stopped-upgrade-682800    | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:04 UTC | 27 Dec 23 00:04 UTC |
	| start   | -p auto-344500 --memory=3072          | auto-344500               | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:04 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | docker-flags-107900 ssh               | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:06 UTC | 27 Dec 23 00:06 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=Environment                |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| ssh     | docker-flags-107900 ssh               | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:06 UTC | 27 Dec 23 00:07 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=ExecStart                  |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-107900                | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:07 UTC | 27 Dec 23 00:07 UTC |
	| start   | -p kindnet-344500                     | kindnet-344500            | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:07 UTC |                     |
	|         | --memory=3072                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --cni=kindnet --driver=hyperv         |                           |                   |         |                     |                     |
	| delete  | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:08 UTC | 27 Dec 23 00:08 UTC |
	| start   | -p calico-344500 --memory=3072        | calico-344500             | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:08 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --cni=calico --driver=hyperv          |                           |                   |         |                     |                     |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/27 00:08:50
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 00:08:50.382837   13968 out.go:296] Setting OutFile to fd 1436 ...
	I1227 00:08:50.382837   13968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1227 00:08:50.382837   13968 out.go:309] Setting ErrFile to fd 1384...
	I1227 00:08:50.382837   13968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1227 00:08:50.407825   13968 out.go:303] Setting JSON to false
	I1227 00:08:50.411827   13968 start.go:128] hostinfo: {"hostname":"minikube1","uptime":10129,"bootTime":1703625601,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1227 00:08:50.411827   13968 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1227 00:08:50.415532   13968 out.go:177] * [calico-344500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1227 00:08:50.419948   13968 notify.go:220] Checking for updates...
	I1227 00:08:50.422603   13968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1227 00:08:50.425227   13968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 00:08:50.430068   13968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1227 00:08:50.432737   13968 out.go:177]   - MINIKUBE_LOCATION=17857
	I1227 00:08:50.435282   13968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 00:08:49.017888    8152 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:08:49.017888    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:50.020076    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:08:52.647985    9096 ssh_runner.go:235] Completed: docker stop a9088dec4fb6 8b6ae4916189 8171cfbf5ab2 0a5a40aa5e6c d66a1426a5db 3d0e6d0a21fe 60e1fde3ed4d 586dd0e9555c cd0ae2e3e41a 95cda17fa533 154bc9ebb206 a25058c31a5d 3b1e02e78a7c 605244dc8479 12cef0cb54b4 76b1e5be96a2 bfe92142e6fb 19efdc59ee92 1bb2ae6d13ea e262913fa9aa fb06efeea0a1 cb482d23561f: (5.9710838s)
	I1227 00:08:52.673417    9096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1227 00:08:52.793340    9096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 00:08:52.821741    9096 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Dec 27 00:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Dec 27 00:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Dec 27 00:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec 27 00:02 /etc/kubernetes/scheduler.conf
	
	I1227 00:08:52.842671    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 00:08:52.890998    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 00:08:52.948201    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 00:08:52.968871    9096 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:52.999248    9096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 00:08:53.061247    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 00:08:53.078306    9096 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:53.092363    9096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 00:08:53.124934    9096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 00:08:53.140945    9096 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1227 00:08:53.141040    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:53.265706    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:50.446414   13968 config.go:182] Loaded profile config "auto-344500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:08:50.447708   13968 config.go:182] Loaded profile config "kindnet-344500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:08:50.448625   13968 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:08:50.448988   13968 driver.go:392] Setting default libvirt URI to qemu:///system
	I1227 00:08:56.185723   13968 out.go:177] * Using the hyperv driver based on user configuration
	I1227 00:08:56.189745   13968 start.go:298] selected driver: hyperv
	I1227 00:08:56.189745   13968 start.go:902] validating driver "hyperv" against <nil>
	I1227 00:08:56.189745   13968 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 00:08:56.241725   13968 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1227 00:08:56.242729   13968 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 00:08:56.242729   13968 cni.go:84] Creating CNI manager for "calico"
	I1227 00:08:56.242729   13968 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1227 00:08:56.242729   13968 start_flags.go:323] config:
	{Name:calico-344500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-344500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1227 00:08:56.243730   13968 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 00:08:56.247731   13968 out.go:177] * Starting control plane node calico-344500 in cluster calico-344500
	I1227 00:08:52.481779    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:08:52.481779    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:52.481779    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:08:55.626255    8152 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:08:55.626255    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:54.315704    9096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0499976s)
	I1227 00:08:54.315704    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.656996    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.782620    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.896415    9096 api_server.go:52] waiting for apiserver process to appear ...
	I1227 00:08:54.910987    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:08:54.935090    9096 api_server.go:72] duration metric: took 38.6749ms to wait for apiserver process to appear ...
	I1227 00:08:54.935155    9096 api_server.go:88] waiting for apiserver healthz status ...
	I1227 00:08:54.935232    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:56.250732   13968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1227 00:08:56.250732   13968 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1227 00:08:56.250732   13968 cache.go:56] Caching tarball of preloaded images
	I1227 00:08:56.250732   13968 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 00:08:56.250732   13968 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1227 00:08:56.250732   13968 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-344500\config.json ...
	I1227 00:08:56.251756   13968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-344500\config.json: {Name:mkf23e75ec2de1c49255a17e9b45e97016c94c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:08:56.252721   13968 start.go:365] acquiring machines lock for calico-344500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 00:08:56.628847    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:08:59.433854    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:08:59.433854    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:59.433854    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:08:59.942971    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:08:59.942971    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:02.065432    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:02.065432    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:02.065432    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:04.265216    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:04.265216    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:04.265216    8152 machine.go:88] provisioning docker machine ...
	I1227 00:09:04.265216    8152 buildroot.go:166] provisioning hostname "auto-344500"
	I1227 00:09:04.265216    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:04.951424    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:09:04.951493    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:06.471400    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:06.471611    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:06.471611    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:09.027461    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:09.027461    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:09.034637    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:09.035454    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:09.035454    8152 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-344500 && echo "auto-344500" | sudo tee /etc/hostname
	I1227 00:09:09.198189    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-344500
	
	I1227 00:09:09.198325    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:11.382296    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:11.382296    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:11.382296    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:09.962563    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:09:09.962645    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:12.806412    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": read tcp 172.21.176.1:62991->172.21.179.115:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I1227 00:09:12.806412    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:13.978590    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:13.978632    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:13.985187    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:13.985964    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:13.986044    8152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-344500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-344500/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-344500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 00:09:14.139730    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1227 00:09:14.139827    8152 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1227 00:09:14.139884    8152 buildroot.go:174] setting up certificates
	I1227 00:09:14.139939    8152 provision.go:83] configureAuth start
	I1227 00:09:14.139989    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:16.365417    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:16.365483    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:16.365483    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:14.831817    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": dial tcp 172.21.179.115:8443: connectex: No connection could be made because the target machine actively refused it.
	I1227 00:09:14.831817    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.558135    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 00:09:18.558135    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:09:18.558135    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.601624    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 00:09:18.602050    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:09:18.940244    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.951278    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:18.951508    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:19.450382    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:19.459806    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:19.460083    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:19.940696    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:19.964611    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:19.964611    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:20.448905    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:20.456907    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 200:
	ok
	I1227 00:09:20.472245    9096 api_server.go:141] control plane version: v1.28.4
	I1227 00:09:20.472351    9096 api_server.go:131] duration metric: took 25.5372053s to wait for apiserver health ...
	I1227 00:09:20.472405    9096 cni.go:84] Creating CNI manager for ""
	I1227 00:09:20.472405    9096 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 00:09:20.475383    9096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1227 00:09:19.038167    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:19.038167    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:19.038167    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:21.271802    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:21.272078    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:21.272190    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:20.490174    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1227 00:09:20.507349    9096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1227 00:09:20.534579    9096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 00:09:20.548022    9096 system_pods.go:59] 6 kube-system pods found
	I1227 00:09:20.548022    9096 system_pods.go:61] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 00:09:20.548022    9096 system_pods.go:61] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 00:09:20.548022    9096 system_pods.go:74] duration metric: took 13.4432ms to wait for pod list to return data ...
	I1227 00:09:20.548022    9096 node_conditions.go:102] verifying NodePressure condition ...
	I1227 00:09:20.554098    9096 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1227 00:09:20.554098    9096 node_conditions.go:123] node cpu capacity is 2
	I1227 00:09:20.554098    9096 node_conditions.go:105] duration metric: took 6.0764ms to run NodePressure ...
	I1227 00:09:20.554098    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:09:20.934651    9096 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1227 00:09:20.946519    9096 kubeadm.go:787] kubelet initialised
	I1227 00:09:20.946519    9096 kubeadm.go:788] duration metric: took 11.8026ms waiting for restarted kubelet to initialise ...
	I1227 00:09:20.946694    9096 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:20.955786    9096 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:21.473959    9096 pod_ready.go:92] pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:21.473959    9096 pod_ready.go:81] duration metric: took 518.1186ms waiting for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:21.473959    9096 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:23.496968    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:23.848172    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:23.848172    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:23.848372    8152 provision.go:138] copyHostCerts
	I1227 00:09:23.848953    8152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1227 00:09:23.849061    8152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1227 00:09:23.849738    8152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 00:09:23.851649    8152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1227 00:09:23.851746    8152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1227 00:09:23.852057    8152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 00:09:23.853506    8152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1227 00:09:23.853617    8152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1227 00:09:23.853977    8152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 00:09:23.855350    8152 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-344500 san=[172.21.177.64 172.21.177.64 localhost 127.0.0.1 minikube auto-344500]
	I1227 00:09:23.939923    8152 provision.go:172] copyRemoteCerts
	I1227 00:09:23.956045    8152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 00:09:23.956207    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:26.122882    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:26.122960    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:26.123053    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:25.991427    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:28.000016    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:28.775042    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:28.775042    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:28.775346    8152 sshutil.go:53] new ssh client: &{IP:172.21.177.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\auto-344500\id_rsa Username:docker}
	I1227 00:09:28.884647    8152 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.928456s)
	I1227 00:09:28.885486    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1227 00:09:28.925576    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 00:09:28.968538    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 00:09:29.009880    8152 provision.go:86] duration metric: configureAuth took 14.869946s
	I1227 00:09:29.009946    8152 buildroot.go:189] setting minikube options for container-runtime
	I1227 00:09:29.010132    8152 config.go:182] Loaded profile config "auto-344500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:09:29.010132    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:31.186466    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:31.186466    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:31.186537    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:30.498627    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:32.993816    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:34.025638    9096 pod_ready.go:92] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.025638    9096 pod_ready.go:81] duration metric: took 12.5516836s waiting for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.025638    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.056983    9096 pod_ready.go:92] pod "kube-apiserver-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.056983    9096 pod_ready.go:81] duration metric: took 31.3452ms waiting for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.056983    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.075800    9096 pod_ready.go:92] pod "kube-controller-manager-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.075800    9096 pod_ready.go:81] duration metric: took 18.8162ms waiting for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.075800    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.100634    9096 pod_ready.go:92] pod "kube-proxy-7qklg" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.100634    9096 pod_ready.go:81] duration metric: took 24.8343ms waiting for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.100634    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.108368    9096 pod_ready.go:92] pod "kube-scheduler-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.108368    9096 pod_ready.go:81] duration metric: took 7.734ms waiting for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.108368    9096 pod_ready.go:38] duration metric: took 13.1616782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:34.108368    9096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 00:09:34.129956    9096 ops.go:34] apiserver oom_adj: -16
	I1227 00:09:34.130036    9096 kubeadm.go:640] restartCluster took 1m6.5450256s
	I1227 00:09:34.130081    9096 kubeadm.go:406] StartCluster complete in 1m6.615372s
	I1227 00:09:34.130146    9096 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:09:34.130346    9096 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1227 00:09:34.131613    9096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:09:34.132872    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 00:09:34.132872    9096 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1227 00:09:34.136858    9096 out.go:177] * Enabled addons: 
	I1227 00:09:34.133793    9096 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:09:34.139248    9096 addons.go:508] enable addons completed in 6.3757ms: enabled=[]
	I1227 00:09:34.148140    9096 kapi.go:59] client config for pause-178300: &rest.Config{Host:"https://172.21.179.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 00:09:34.153743    9096 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-178300" context rescaled to 1 replicas
	I1227 00:09:34.153871    9096 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.179.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 00:09:34.156801    9096 out.go:177] * Verifying Kubernetes components...
	I1227 00:09:34.171210    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 00:09:33.798391    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:33.798659    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:33.804708    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:33.805446    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:33.805611    8152 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 00:09:33.947210    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 00:09:33.947210    8152 buildroot.go:70] root file system type: tmpfs
	I1227 00:09:33.947210    8152 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 00:09:33.947210    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:36.138744    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:36.138843    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:36.138843    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:34.278688    9096 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1227 00:09:34.278744    9096 node_ready.go:35] waiting up to 6m0s for node "pause-178300" to be "Ready" ...
	I1227 00:09:34.283361    9096 node_ready.go:49] node "pause-178300" has status "Ready":"True"
	I1227 00:09:34.283361    9096 node_ready.go:38] duration metric: took 4.6162ms waiting for node "pause-178300" to be "Ready" ...
	I1227 00:09:34.283361    9096 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:34.405767    9096 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.795719    9096 pod_ready.go:92] pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.795719    9096 pod_ready.go:81] duration metric: took 389.9522ms waiting for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.795719    9096 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.194062    9096 pod_ready.go:92] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:35.194126    9096 pod_ready.go:81] duration metric: took 398.4075ms waiting for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.194126    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.590396    9096 pod_ready.go:92] pod "kube-apiserver-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:35.590396    9096 pod_ready.go:81] duration metric: took 396.1924ms waiting for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.590498    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.001547    9096 pod_ready.go:92] pod "kube-controller-manager-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.001547    9096 pod_ready.go:81] duration metric: took 411.0488ms waiting for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.001547    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.396452    9096 pod_ready.go:92] pod "kube-proxy-7qklg" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.396452    9096 pod_ready.go:81] duration metric: took 394.9057ms waiting for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.396452    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.794869    9096 pod_ready.go:92] pod "kube-scheduler-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.794869    9096 pod_ready.go:81] duration metric: took 398.4165ms waiting for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.794940    9096 pod_ready.go:38] duration metric: took 2.5115805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:36.794940    9096 api_server.go:52] waiting for apiserver process to appear ...
	I1227 00:09:36.807933    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:09:36.829983    9096 api_server.go:72] duration metric: took 2.6761123s to wait for apiserver process to appear ...
	I1227 00:09:36.829983    9096 api_server.go:88] waiting for apiserver healthz status ...
	I1227 00:09:36.829983    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:36.840216    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 200:
	ok
	I1227 00:09:36.841949    9096 api_server.go:141] control plane version: v1.28.4
	I1227 00:09:36.842799    9096 api_server.go:131] duration metric: took 12.8162ms to wait for apiserver health ...
	I1227 00:09:36.842799    9096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 00:09:37.005545    9096 system_pods.go:59] 6 kube-system pods found
	I1227 00:09:37.005658    9096 system_pods.go:61] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running
	I1227 00:09:37.005658    9096 system_pods.go:74] duration metric: took 162.8589ms to wait for pod list to return data ...
	I1227 00:09:37.005658    9096 default_sa.go:34] waiting for default service account to be created ...
	I1227 00:09:37.191345    9096 default_sa.go:45] found service account: "default"
	I1227 00:09:37.191345    9096 default_sa.go:55] duration metric: took 185.5493ms for default service account to be created ...
	I1227 00:09:37.191473    9096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 00:09:37.404717    9096 system_pods.go:86] 6 kube-system pods found
	I1227 00:09:37.404717    9096 system_pods.go:89] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running
	I1227 00:09:37.404717    9096 system_pods.go:89] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running
	I1227 00:09:37.404796    9096 system_pods.go:126] duration metric: took 213.3231ms to wait for k8s-apps to be running ...
	I1227 00:09:37.404856    9096 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 00:09:37.417809    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 00:09:37.443598    9096 system_svc.go:56] duration metric: took 38.3087ms WaitForService to wait for kubelet.
	I1227 00:09:37.443598    9096 kubeadm.go:581] duration metric: took 3.2897272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1227 00:09:37.443598    9096 node_conditions.go:102] verifying NodePressure condition ...
	I1227 00:09:37.604240    9096 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1227 00:09:37.604356    9096 node_conditions.go:123] node cpu capacity is 2
	I1227 00:09:37.604356    9096 node_conditions.go:105] duration metric: took 160.7584ms to run NodePressure ...
	I1227 00:09:37.604356    9096 start.go:228] waiting for startup goroutines ...
	I1227 00:09:37.604356    9096 start.go:233] waiting for cluster config update ...
	I1227 00:09:37.604356    9096 start.go:242] writing updated cluster config ...
	I1227 00:09:37.619816    9096 ssh_runner.go:195] Run: rm -f paused
	I1227 00:09:37.784765    9096 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1227 00:09:37.789081    9096 out.go:177] * Done! kubectl is now configured to use "pause-178300" cluster and "default" namespace by default
	I1227 00:09:38.848756    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:38.848756    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:38.853521    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:38.854812    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:38.854948    8152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 00:09:39.005637    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 00:09:39.005738    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:41.217667    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:41.217894    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:41.218178    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:43.853884    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:43.853884    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:43.858608    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:43.859237    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:43.859237    8152 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 00:09:45.037067    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1227 00:09:45.037128    8152 machine.go:91] provisioned docker machine in 40.7719258s
	I1227 00:09:45.037190    8152 client.go:171] LocalClient.Create took 1m55.9465223s
	I1227 00:09:45.037190    8152 start.go:167] duration metric: libmachine.API.Create for "auto-344500" took 1m55.9472234s
	I1227 00:09:45.037342    8152 start.go:300] post-start starting for "auto-344500" (driver="hyperv")
	I1227 00:09:45.037401    8152 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 00:09:45.051345    8152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 00:09:45.051708    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	
	
	==> Docker <==
	-- Journal begins at Wed 2023-12-27 00:00:28 UTC, ends at Wed 2023-12-27 00:09:58 UTC. --
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262132511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262400111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262433811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262451011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322352309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322590609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322683309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322765109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.352445758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.352564458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.353005557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.353040357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.925367688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.925876487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.926173087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.926929785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:18 pause-178300 cri-dockerd[7640]: time="2023-12-27T00:09:18Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402379253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402524053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402545853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402559853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.413350935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.413656535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.413910734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.414099634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f5b7bbc89d6ae       83f6cc407eed8       39 seconds ago       Running             kube-proxy                2                   7a23a955bf12a       kube-proxy-7qklg
	57f089f59d89d       ead0a4a53df89       39 seconds ago       Running             coredns                   2                   99a37e2a3c12b       coredns-5dd5756b68-68vdw
	98ec605b2087a       7fe0e6f37db33       44 seconds ago       Running             kube-apiserver            3                   78cfdb6cbb5f8       kube-apiserver-pause-178300
	15fd97aaef6c3       e3db313c6dbc0       45 seconds ago       Running             kube-scheduler            2                   9c77f16148701       kube-scheduler-pause-178300
	fbb65e0aaa078       d058aa5ab969c       45 seconds ago       Running             kube-controller-manager   2                   4f1b8d0295f64       kube-controller-manager-pause-178300
	3e3ed1fc66542       73deb9a3f7025       45 seconds ago       Running             etcd                      2                   c172fb604a8a3       etcd-pause-178300
	9cfb1e684b435       7fe0e6f37db33       About a minute ago   Exited              kube-apiserver            2                   78cfdb6cbb5f8       kube-apiserver-pause-178300
	a9088dec4fb61       83f6cc407eed8       About a minute ago   Exited              kube-proxy                1                   a25058c31a5dc       kube-proxy-7qklg
	8b6ae4916189a       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   60e1fde3ed4de       coredns-5dd5756b68-68vdw
	8171cfbf5ab26       e3db313c6dbc0       About a minute ago   Exited              kube-scheduler            1                   154bc9ebb206a       kube-scheduler-pause-178300
	0a5a40aa5e6ce       73deb9a3f7025       About a minute ago   Exited              etcd                      1                   95cda17fa5332       etcd-pause-178300
	d66a1426a5dba       d058aa5ab969c       About a minute ago   Exited              kube-controller-manager   1                   586dd0e9555c7       kube-controller-manager-pause-178300
	
	
	==> coredns [57f089f59d89] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 727859d02e49f226305255353d6ea73d4e25f577656e92efc00f8bdfe7b9e0a41c48e607fb0e54b875432612a89a9ff227ec88b4a4c86d52ce98698e96c5359a
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52475 - 64947 "HINFO IN 2163937736029644199.1187910723709867958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05508431s
	
	
	==> coredns [8b6ae4916189] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 727859d02e49f226305255353d6ea73d4e25f577656e92efc00f8bdfe7b9e0a41c48e607fb0e54b875432612a89a9ff227ec88b4a4c86d52ce98698e96c5359a
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60953 - 40417 "HINFO IN 4534999005635974088.6528223528083456404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056562601s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-178300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-178300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=pause-178300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_27T00_02_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Dec 2023 00:02:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-178300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Dec 2023 00:09:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.179.115
	  Hostname:    pause-178300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 007ddc59221046b5828896e1dada895d
	  System UUID:                fc88b303-7776-3d46-9824-c9f0cd224106
	  Boot ID:                    f7dc9f5a-452c-4536-93d1-5c26dc3a92c6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-68vdw                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 etcd-pause-178300                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         7m26s
	  kube-system                 kube-apiserver-pause-178300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-controller-manager-pause-178300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-7qklg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-pause-178300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m12s                  kube-proxy       
	  Normal  Starting                 39s                    kube-proxy       
	  Normal  Starting                 80s                    kube-proxy       
	  Normal  Starting                 7m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m38s (x8 over 7m38s)  kubelet          Node pause-178300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x8 over 7m38s)  kubelet          Node pause-178300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x7 over 7m38s)  kubelet          Node pause-178300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m27s                  kubelet          Node pause-178300 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m27s                  kubelet          Node pause-178300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m27s                  kubelet          Node pause-178300 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m22s                  kubelet          Node pause-178300 status is now: NodeReady
	  Normal  RegisteredNode           7m16s                  node-controller  Node pause-178300 event: Registered Node pause-178300 in Controller
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node pause-178300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node pause-178300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x7 over 63s)      kubelet          Node pause-178300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                    node-controller  Node pause-178300 event: Registered Node pause-178300 in Controller
	
	
	==> dmesg <==
	[  +0.176100] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.201900] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +1.386701] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.421274] systemd-fstab-generator[1152]: Ignoring "noauto" for root device
	[  +0.196228] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.188844] systemd-fstab-generator[1174]: Ignoring "noauto" for root device
	[  +0.193436] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +0.239791] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[Dec27 00:02] systemd-fstab-generator[1306]: Ignoring "noauto" for root device
	[  +2.294244] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.638530] systemd-fstab-generator[1688]: Ignoring "noauto" for root device
	[  +0.971815] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.067719] systemd-fstab-generator[2712]: Ignoring "noauto" for root device
	[Dec27 00:08] systemd-fstab-generator[6914]: Ignoring "noauto" for root device
	[  +0.630962] systemd-fstab-generator[6953]: Ignoring "noauto" for root device
	[  +0.283846] systemd-fstab-generator[6964]: Ignoring "noauto" for root device
	[  +0.318195] systemd-fstab-generator[6986]: Ignoring "noauto" for root device
	[  +0.325178] kauditd_printk_skb: 23 callbacks suppressed
	[ +11.923220] systemd-fstab-generator[7522]: Ignoring "noauto" for root device
	[  +0.223646] systemd-fstab-generator[7533]: Ignoring "noauto" for root device
	[  +0.200911] systemd-fstab-generator[7544]: Ignoring "noauto" for root device
	[  +0.190045] systemd-fstab-generator[7555]: Ignoring "noauto" for root device
	[  +0.263826] systemd-fstab-generator[7575]: Ignoring "noauto" for root device
	[  +8.042695] kauditd_printk_skb: 29 callbacks suppressed
	[ +20.946618] systemd-fstab-generator[9569]: Ignoring "noauto" for root device
	
	
	==> etcd [0a5a40aa5e6c] <==
	{"level":"warn","ts":"2023-12-27T00:08:47.051579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.349828ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1116202938423774998 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-178300.17a487a4173b4f4f\" mod_revision:530 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-178300.17a487a4173b4f4f\" value_size:747 lease:1116202938423774855 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-pause-178300.17a487a4173b4f4f\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-27T00:08:47.051696Z","caller":"traceutil/trace.go:171","msg":"trace[16200084] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"275.766636ms","start":"2023-12-27T00:08:46.775904Z","end":"2023-12-27T00:08:47.051671Z","steps":["trace[16200084] 'read index received'  (duration: 140.18461ms)","trace[16200084] 'applied index is now lower than readState.Index'  (duration: 135.580926ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-27T00:08:47.051764Z","caller":"traceutil/trace.go:171","msg":"trace[184720957] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"277.231432ms","start":"2023-12-27T00:08:46.774524Z","end":"2023-12-27T00:08:47.051755Z","steps":["trace[184720957] 'process raft request'  (duration: 141.633105ms)","trace[184720957] 'compare'  (duration: 134.58153ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-27T00:08:47.052065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.290235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:public-info-viewer\" ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2023-12-27T00:08:47.052122Z","caller":"traceutil/trace.go:171","msg":"trace[1921880010] range","detail":"{range_begin:/registry/clusterrolebindings/system:public-info-viewer; range_end:; response_count:1; response_revision:549; }","duration":"276.349535ms","start":"2023-12-27T00:08:46.775765Z","end":"2023-12-27T00:08:47.052114Z","steps":["trace[1921880010] 'agreement among raft nodes before linearized reading'  (duration: 276.231535ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:08:47.052233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.826543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-27T00:08:47.05232Z","caller":"traceutil/trace.go:171","msg":"trace[393002975] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:549; }","duration":"165.913742ms","start":"2023-12-27T00:08:46.886399Z","end":"2023-12-27T00:08:47.052313Z","steps":["trace[393002975] 'agreement among raft nodes before linearized reading'  (duration: 165.813643ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-27T00:08:47.444594Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-27T00:08:47.444658Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-178300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.21.179.115:2380"],"advertise-client-urls":["https://172.21.179.115:2379"]}
	{"level":"warn","ts":"2023-12-27T00:08:47.444785Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-27T00:08:47.444981Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	WARNING: 2023/12/27 00:08:47 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-12-27T00:08:47.445508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.462861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:kube-dns\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2023-12-27T00:08:47.445547Z","caller":"traceutil/trace.go:171","msg":"trace[23265992] range","detail":"{range_begin:/registry/clusterrolebindings/system:kube-dns; range_end:; }","duration":"384.582361ms","start":"2023-12-27T00:08:47.060953Z","end":"2023-12-27T00:08:47.445535Z","steps":["trace[23265992] 'agreement among raft nodes before linearized reading'  (duration: 384.461561ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:08:47.445574Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:08:47.060945Z","time spent":"384.618161ms","remote":"127.0.0.1:48306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/system:kube-dns\" "}
	WARNING: 2023/12/27 00:08:47 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-12-27T00:08:47.445776Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:08:47.059879Z","time spent":"385.889957ms","remote":"127.0.0.1:48242","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	WARNING: 2023/12/27 00:08:47 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-12-27T00:08:47.467311Z","caller":"traceutil/trace.go:171","msg":"trace[2032799466] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"406.272902ms","start":"2023-12-27T00:08:47.061021Z","end":"2023-12-27T00:08:47.467294Z","steps":["trace[2032799466] 'read index received'  (duration: 309.285064ms)","trace[2032799466] 'applied index is now lower than readState.Index'  (duration: 96.986538ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-27T00:08:47.514604Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.21.179.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-27T00:08:47.514663Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.21.179.115:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-27T00:08:47.51473Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"117d4d98b7f20f7d","current-leader-member-id":"117d4d98b7f20f7d"}
	{"level":"info","ts":"2023-12-27T00:08:48.117283Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.21.179.115:2380"}
	{"level":"info","ts":"2023-12-27T00:08:48.117566Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.21.179.115:2380"}
	{"level":"info","ts":"2023-12-27T00:08:48.117597Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-178300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.21.179.115:2380"],"advertise-client-urls":["https://172.21.179.115:2379"]}
	
	
	==> etcd [3e3ed1fc6654] <==
	{"level":"info","ts":"2023-12-27T00:09:14.513705Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-27T00:09:14.513713Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-27T00:09:14.513937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d switched to configuration voters=(1260248789050068861)"}
	{"level":"info","ts":"2023-12-27T00:09:14.513996Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"abd8a480d9da5ff4","local-member-id":"117d4d98b7f20f7d","added-peer-id":"117d4d98b7f20f7d","added-peer-peer-urls":["https://172.21.179.115:2380"]}
	{"level":"info","ts":"2023-12-27T00:09:14.514084Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"abd8a480d9da5ff4","local-member-id":"117d4d98b7f20f7d","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-27T00:09:14.514117Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-27T00:09:14.54197Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-27T00:09:14.542205Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"117d4d98b7f20f7d","initial-advertise-peer-urls":["https://172.21.179.115:2380"],"listen-peer-urls":["https://172.21.179.115:2380"],"advertise-client-urls":["https://172.21.179.115:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.21.179.115:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-27T00:09:14.542291Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-27T00:09:14.542373Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.21.179.115:2380"}
	{"level":"info","ts":"2023-12-27T00:09:14.542381Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.21.179.115:2380"}
	{"level":"info","ts":"2023-12-27T00:09:15.779269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-27T00:09:15.779506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-27T00:09:15.779673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d received MsgPreVoteResp from 117d4d98b7f20f7d at term 3"}
	{"level":"info","ts":"2023-12-27T00:09:15.779897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d became candidate at term 4"}
	{"level":"info","ts":"2023-12-27T00:09:15.779917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d received MsgVoteResp from 117d4d98b7f20f7d at term 4"}
	{"level":"info","ts":"2023-12-27T00:09:15.779929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"117d4d98b7f20f7d became leader at term 4"}
	{"level":"info","ts":"2023-12-27T00:09:15.780038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 117d4d98b7f20f7d elected leader 117d4d98b7f20f7d at term 4"}
	{"level":"info","ts":"2023-12-27T00:09:15.784645Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"117d4d98b7f20f7d","local-member-attributes":"{Name:pause-178300 ClientURLs:[https://172.21.179.115:2379]}","request-path":"/0/members/117d4d98b7f20f7d/attributes","cluster-id":"abd8a480d9da5ff4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-27T00:09:15.785065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-27T00:09:15.785795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-27T00:09:15.805187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-27T00:09:15.806242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.21.179.115:2379"}
	{"level":"info","ts":"2023-12-27T00:09:15.816698Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-27T00:09:15.816767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:09:59 up 9 min,  0 users,  load average: 1.58, 0.99, 0.46
	Linux pause-178300 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [98ec605b2087] <==
	I1227 00:09:18.538596       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1227 00:09:18.522137       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1227 00:09:18.522089       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1227 00:09:18.704716       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 00:09:18.704800       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 00:09:18.704809       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 00:09:18.704825       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 00:09:18.706130       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 00:09:18.706786       1 aggregator.go:166] initial CRD sync complete...
	I1227 00:09:18.706826       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 00:09:18.706833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 00:09:18.706839       1 cache.go:39] Caches are synced for autoregister controller
	I1227 00:09:18.722874       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 00:09:18.723686       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 00:09:18.754812       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 00:09:18.762847       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 00:09:19.568412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1227 00:09:20.083643       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.21.179.115]
	I1227 00:09:20.086798       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 00:09:20.098452       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 00:09:20.733632       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 00:09:20.755078       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 00:09:20.835580       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 00:09:20.903976       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 00:09:20.917693       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [9cfb1e684b43] <==
	I1227 00:08:52.079137       1 server.go:148] Version: v1.28.4
	I1227 00:08:52.079356       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1227 00:08:52.781941       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1227 00:08:52.783751       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W1227 00:08:52.784804       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1227 00:08:52.796789       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1227 00:08:52.796862       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1227 00:08:52.797157       1 instance.go:298] Using reconciler: lease
	W1227 00:08:52.799222       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:53.783339       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:53.786111       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:53.799881       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:55.362405       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:55.525728       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:55.555413       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:57.580971       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:57.607010       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:58.361504       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:00.946711       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:01.881909       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:01.907691       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:07.671612       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:07.730007       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:08.375862       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1227 00:09:12.799102       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [d66a1426a5db] <==
	I1227 00:08:34.466739       1 serving.go:348] Generated self-signed cert in-memory
	I1227 00:08:35.368504       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I1227 00:08:35.368555       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:08:35.370606       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 00:08:35.370883       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 00:08:35.373887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1227 00:08:35.374587       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [fbb65e0aaa07] <==
	I1227 00:09:31.054105       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1227 00:09:31.054203       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1227 00:09:31.054595       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 00:09:31.054925       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1227 00:09:31.058014       1 shared_informer.go:318] Caches are synced for GC
	I1227 00:09:31.063132       1 shared_informer.go:318] Caches are synced for service account
	I1227 00:09:31.063449       1 shared_informer.go:318] Caches are synced for taint
	I1227 00:09:31.064392       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1227 00:09:31.064745       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-178300"
	I1227 00:09:31.064933       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 00:09:31.065037       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1227 00:09:31.065183       1 taint_manager.go:210] "Sending events to api server"
	I1227 00:09:31.068331       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1227 00:09:31.068858       1 event.go:307] "Event occurred" object="pause-178300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-178300 event: Registered Node pause-178300 in Controller"
	I1227 00:09:31.084370       1 shared_informer.go:318] Caches are synced for HPA
	I1227 00:09:31.110301       1 shared_informer.go:318] Caches are synced for disruption
	I1227 00:09:31.112078       1 shared_informer.go:318] Caches are synced for stateful set
	I1227 00:09:31.122109       1 shared_informer.go:318] Caches are synced for endpoint
	I1227 00:09:31.152810       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 00:09:31.181165       1 shared_informer.go:318] Caches are synced for cronjob
	I1227 00:09:31.188662       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1227 00:09:31.197502       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 00:09:31.598438       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 00:09:31.598495       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 00:09:31.617678       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [a9088dec4fb6] <==
	I1227 00:08:35.447598       1 server_others.go:69] "Using iptables proxy"
	I1227 00:08:38.286089       1 node.go:141] Successfully retrieved node IP: 172.21.179.115
	I1227 00:08:38.360797       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1227 00:08:38.360907       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1227 00:08:38.365523       1 server_others.go:152] "Using iptables Proxier"
	I1227 00:08:38.365742       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 00:08:38.366213       1 server.go:846] "Version info" version="v1.28.4"
	I1227 00:08:38.366480       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:08:38.368906       1 config.go:188] "Starting service config controller"
	I1227 00:08:38.369141       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 00:08:38.369470       1 config.go:97] "Starting endpoint slice config controller"
	I1227 00:08:38.369655       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 00:08:38.373171       1 config.go:315] "Starting node config controller"
	I1227 00:08:38.373449       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 00:08:38.470238       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 00:08:38.470398       1 shared_informer.go:318] Caches are synced for service config
	I1227 00:08:38.473889       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f5b7bbc89d6a] <==
	I1227 00:09:19.648644       1 server_others.go:69] "Using iptables proxy"
	I1227 00:09:19.676503       1 node.go:141] Successfully retrieved node IP: 172.21.179.115
	I1227 00:09:19.732610       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1227 00:09:19.732654       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1227 00:09:19.739665       1 server_others.go:152] "Using iptables Proxier"
	I1227 00:09:19.741095       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 00:09:19.742050       1 server.go:846] "Version info" version="v1.28.4"
	I1227 00:09:19.742816       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:09:19.745207       1 config.go:188] "Starting service config controller"
	I1227 00:09:19.745595       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 00:09:19.746032       1 config.go:97] "Starting endpoint slice config controller"
	I1227 00:09:19.746440       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 00:09:19.747701       1 config.go:315] "Starting node config controller"
	I1227 00:09:19.747971       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 00:09:19.846341       1 shared_informer.go:318] Caches are synced for service config
	I1227 00:09:19.846743       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 00:09:19.849501       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [15fd97aaef6c] <==
	I1227 00:09:15.718162       1 serving.go:348] Generated self-signed cert in-memory
	W1227 00:09:18.637532       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 00:09:18.637729       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 00:09:18.637951       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 00:09:18.638226       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 00:09:18.726033       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1227 00:09:18.726176       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:09:18.729127       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 00:09:18.729352       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 00:09:18.731160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1227 00:09:18.731445       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 00:09:18.830388       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8171cfbf5ab2] <==
	I1227 00:08:35.068380       1 serving.go:348] Generated self-signed cert in-memory
	W1227 00:08:37.605034       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 00:08:37.605356       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 00:08:37.605572       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 00:08:37.605820       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 00:08:37.798139       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1227 00:08:37.798373       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:08:37.808097       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1227 00:08:37.808209       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 00:08:37.811373       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 00:08:37.808763       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 00:08:37.912882       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 00:08:47.507745       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1227 00:08:47.508038       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1227 00:08:47.508243       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1227 00:08:47.508674       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Wed 2023-12-27 00:00:28 UTC, ends at Wed 2023-12-27 00:09:59 UTC. --
	Dec 27 00:09:13 pause-178300 kubelet[9575]: I1227 00:09:13.966292    9575 scope.go:117] "RemoveContainer" containerID="d66a1426a5dbaa737c5a523391faadc567726fef632d95be44c9fb797bb7b09a"
	Dec 27 00:09:13 pause-178300 kubelet[9575]: I1227 00:09:13.982779    9575 scope.go:117] "RemoveContainer" containerID="8171cfbf5ab26d996daafe94ebbdd7bbf5ba920b356a5e7eb7060b1f0f1a6fbc"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: I1227 00:09:14.065776    9575 kubelet_node_status.go:70] "Attempting to register node" node="pause-178300"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: E1227 00:09:14.066598    9575 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.21.179.115:8443: connect: connection refused" node="pause-178300"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: E1227 00:09:14.211150    9575 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-178300?timeout=10s\": dial tcp 172.21.179.115:8443: connect: connection refused" interval="800ms"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: I1227 00:09:14.663512    9575 scope.go:117] "RemoveContainer" containerID="9cfb1e684b435836d36064eb2e1d7241b91b4fd4867cb08229666fa581d95f58"
	Dec 27 00:09:15 pause-178300 kubelet[9575]: E1227 00:09:15.011921    9575 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-178300?timeout=10s\": dial tcp 172.21.179.115:8443: connect: connection refused" interval="1.6s"
	Dec 27 00:09:15 pause-178300 kubelet[9575]: E1227 00:09:15.229148    9575 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-178300\" not found"
	Dec 27 00:09:15 pause-178300 kubelet[9575]: I1227 00:09:15.684852    9575 kubelet_node_status.go:70] "Attempting to register node" node="pause-178300"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.801007    9575 kubelet_node_status.go:108] "Node was previously registered" node="pause-178300"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.801191    9575 kubelet_node_status.go:73] "Successfully registered node" node="pause-178300"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.804598    9575 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.806814    9575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.875829    9575 apiserver.go:52] "Watching apiserver"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.881000    9575 topology_manager.go:215] "Topology Admit Handler" podUID="e29bd3e6-d025-4c44-abb4-5f07e243d1d8" podNamespace="kube-system" podName="kube-proxy-7qklg"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.882111    9575 topology_manager.go:215] "Topology Admit Handler" podUID="ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95" podNamespace="kube-system" podName="coredns-5dd5756b68-68vdw"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.929212    9575 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.963978    9575 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29bd3e6-d025-4c44-abb4-5f07e243d1d8-xtables-lock\") pod \"kube-proxy-7qklg\" (UID: \"e29bd3e6-d025-4c44-abb4-5f07e243d1d8\") " pod="kube-system/kube-proxy-7qklg"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.964109    9575 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29bd3e6-d025-4c44-abb4-5f07e243d1d8-lib-modules\") pod \"kube-proxy-7qklg\" (UID: \"e29bd3e6-d025-4c44-abb4-5f07e243d1d8\") " pod="kube-system/kube-proxy-7qklg"
	Dec 27 00:09:19 pause-178300 kubelet[9575]: I1227 00:09:19.183297    9575 scope.go:117] "RemoveContainer" containerID="8b6ae4916189a2b9931b38b3bd8e9ba4f8334c7de222b1b6b16dabc009a02703"
	Dec 27 00:09:19 pause-178300 kubelet[9575]: I1227 00:09:19.183716    9575 scope.go:117] "RemoveContainer" containerID="a9088dec4fb61685e8eca1089cd87d8a8766cfc289d89c602cb6674289394b64"
	Dec 27 00:09:55 pause-178300 kubelet[9575]: E1227 00:09:55.053962    9575 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 27 00:09:55 pause-178300 kubelet[9575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 27 00:09:55 pause-178300 kubelet[9575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 27 00:09:55 pause-178300 kubelet[9575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W1227 00:09:50.893029   14012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-178300 -n pause-178300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-178300 -n pause-178300: (12.8119572s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-178300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-178300 -n pause-178300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-178300 -n pause-178300: (13.9584524s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-178300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-178300 logs -n 25: (10.9010798s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-923100             | running-upgrade-923100    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:50 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-164200              | force-systemd-env-164200  | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:51 UTC | 26 Dec 23 23:51 UTC |
	|         | ssh docker info --format              |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-164200           | force-systemd-env-164200  | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:51 UTC | 26 Dec 23 23:52 UTC |
	| delete  | -p cert-expiration-721200             | cert-expiration-721200    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:52 UTC | 26 Dec 23 23:53 UTC |
	| start   | -p cert-options-724600                | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:53 UTC | 26 Dec 23 23:59 UTC |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:55 UTC | 26 Dec 23 23:55 UTC |
	| start   | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:55 UTC | 27 Dec 23 00:00 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-923100             | running-upgrade-923100    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:55 UTC | 26 Dec 23 23:57 UTC |
	| start   | -p pause-178300 --memory=2048         | pause-178300              | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:57 UTC | 27 Dec 23 00:02 UTC |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv            |                           |                   |         |                     |                     |
	| start   | -p stopped-upgrade-682800             | stopped-upgrade-682800    | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:58 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | cert-options-724600 ssh               | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:59 UTC | 26 Dec 23 23:59 UTC |
	|         | openssl x509 -text -noout -in         |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-724600 -- sudo        | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:59 UTC | 26 Dec 23 23:59 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |                   |         |                     |                     |
	| delete  | -p cert-options-724600                | cert-options-724600       | minikube1\jenkins | v1.32.0 | 26 Dec 23 23:59 UTC | 27 Dec 23 00:00 UTC |
	| start   | -p docker-flags-107900                | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:00 UTC | 27 Dec 23 00:06 UTC |
	|         | --cache-images=false                  |                           |                   |         |                     |                     |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=false                          |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                    |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:00 UTC | 27 Dec 23 00:08 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p pause-178300                       | pause-178300              | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:02 UTC | 27 Dec 23 00:09 UTC |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-682800             | stopped-upgrade-682800    | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:04 UTC | 27 Dec 23 00:04 UTC |
	| start   | -p auto-344500 --memory=3072          | auto-344500               | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:04 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | docker-flags-107900 ssh               | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:06 UTC | 27 Dec 23 00:06 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=Environment                |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| ssh     | docker-flags-107900 ssh               | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:06 UTC | 27 Dec 23 00:07 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=ExecStart                  |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-107900                | docker-flags-107900       | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:07 UTC | 27 Dec 23 00:07 UTC |
	| start   | -p kindnet-344500                     | kindnet-344500            | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:07 UTC |                     |
	|         | --memory=3072                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --cni=kindnet --driver=hyperv         |                           |                   |         |                     |                     |
	| delete  | -p kubernetes-upgrade-183800          | kubernetes-upgrade-183800 | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:08 UTC | 27 Dec 23 00:08 UTC |
	| start   | -p calico-344500 --memory=3072        | calico-344500             | minikube1\jenkins | v1.32.0 | 27 Dec 23 00:08 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --cni=calico --driver=hyperv          |                           |                   |         |                     |                     |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/27 00:08:50
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 00:08:50.382837   13968 out.go:296] Setting OutFile to fd 1436 ...
	I1227 00:08:50.382837   13968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1227 00:08:50.382837   13968 out.go:309] Setting ErrFile to fd 1384...
	I1227 00:08:50.382837   13968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1227 00:08:50.407825   13968 out.go:303] Setting JSON to false
	I1227 00:08:50.411827   13968 start.go:128] hostinfo: {"hostname":"minikube1","uptime":10129,"bootTime":1703625601,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1227 00:08:50.411827   13968 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1227 00:08:50.415532   13968 out.go:177] * [calico-344500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1227 00:08:50.419948   13968 notify.go:220] Checking for updates...
	I1227 00:08:50.422603   13968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1227 00:08:50.425227   13968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 00:08:50.430068   13968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1227 00:08:50.432737   13968 out.go:177]   - MINIKUBE_LOCATION=17857
	I1227 00:08:50.435282   13968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 00:08:49.017888    8152 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:08:49.017888    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:50.020076    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:08:52.647985    9096 ssh_runner.go:235] Completed: docker stop a9088dec4fb6 8b6ae4916189 8171cfbf5ab2 0a5a40aa5e6c d66a1426a5db 3d0e6d0a21fe 60e1fde3ed4d 586dd0e9555c cd0ae2e3e41a 95cda17fa533 154bc9ebb206 a25058c31a5d 3b1e02e78a7c 605244dc8479 12cef0cb54b4 76b1e5be96a2 bfe92142e6fb 19efdc59ee92 1bb2ae6d13ea e262913fa9aa fb06efeea0a1 cb482d23561f: (5.9710838s)
	I1227 00:08:52.673417    9096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1227 00:08:52.793340    9096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 00:08:52.821741    9096 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Dec 27 00:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Dec 27 00:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Dec 27 00:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec 27 00:02 /etc/kubernetes/scheduler.conf
	
	I1227 00:08:52.842671    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 00:08:52.890998    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 00:08:52.948201    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 00:08:52.968871    9096 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:52.999248    9096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 00:08:53.061247    9096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 00:08:53.078306    9096 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1227 00:08:53.092363    9096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 00:08:53.124934    9096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 00:08:53.140945    9096 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1227 00:08:53.141040    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:53.265706    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:50.446414   13968 config.go:182] Loaded profile config "auto-344500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:08:50.447708   13968 config.go:182] Loaded profile config "kindnet-344500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:08:50.448625   13968 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:08:50.448988   13968 driver.go:392] Setting default libvirt URI to qemu:///system
	I1227 00:08:56.185723   13968 out.go:177] * Using the hyperv driver based on user configuration
	I1227 00:08:56.189745   13968 start.go:298] selected driver: hyperv
	I1227 00:08:56.189745   13968 start.go:902] validating driver "hyperv" against <nil>
	I1227 00:08:56.189745   13968 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 00:08:56.241725   13968 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1227 00:08:56.242729   13968 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 00:08:56.242729   13968 cni.go:84] Creating CNI manager for "calico"
	I1227 00:08:56.242729   13968 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1227 00:08:56.242729   13968 start_flags.go:323] config:
	{Name:calico-344500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-344500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1227 00:08:56.243730   13968 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 00:08:56.247731   13968 out.go:177] * Starting control plane node calico-344500 in cluster calico-344500
	I1227 00:08:52.481779    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:08:52.481779    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:52.481779    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:08:55.626255    8152 main.go:141] libmachine: [stdout =====>] : 
	I1227 00:08:55.626255    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:54.315704    9096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0499976s)
	I1227 00:08:54.315704    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.656996    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.782620    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:08:54.896415    9096 api_server.go:52] waiting for apiserver process to appear ...
	I1227 00:08:54.910987    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:08:54.935090    9096 api_server.go:72] duration metric: took 38.6749ms to wait for apiserver process to appear ...
	I1227 00:08:54.935155    9096 api_server.go:88] waiting for apiserver healthz status ...
	I1227 00:08:54.935232    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:08:56.250732   13968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1227 00:08:56.250732   13968 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1227 00:08:56.250732   13968 cache.go:56] Caching tarball of preloaded images
	I1227 00:08:56.250732   13968 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1227 00:08:56.250732   13968 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1227 00:08:56.250732   13968 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-344500\config.json ...
	I1227 00:08:56.251756   13968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-344500\config.json: {Name:mkf23e75ec2de1c49255a17e9b45e97016c94c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:08:56.252721   13968 start.go:365] acquiring machines lock for calico-344500: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1227 00:08:56.628847    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:08:59.433854    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:08:59.433854    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:08:59.433854    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:08:59.942971    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:08:59.942971    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:02.065432    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:02.065432    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:02.065432    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:04.265216    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:04.265216    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:04.265216    8152 machine.go:88] provisioning docker machine ...
	I1227 00:09:04.265216    8152 buildroot.go:166] provisioning hostname "auto-344500"
	I1227 00:09:04.265216    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:04.951424    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:09:04.951493    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:06.471400    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:06.471611    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:06.471611    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:09.027461    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:09.027461    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:09.034637    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:09.035454    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:09.035454    8152 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-344500 && echo "auto-344500" | sudo tee /etc/hostname
	I1227 00:09:09.198189    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-344500
	
	I1227 00:09:09.198325    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:11.382296    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:11.382296    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:11.382296    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:09.962563    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 00:09:09.962645    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:12.806412    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": read tcp 172.21.176.1:62991->172.21.179.115:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I1227 00:09:12.806412    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:13.978590    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:13.978632    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:13.985187    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:13.985964    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:13.986044    8152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-344500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-344500/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-344500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 00:09:14.139730    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1227 00:09:14.139827    8152 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1227 00:09:14.139884    8152 buildroot.go:174] setting up certificates
	I1227 00:09:14.139939    8152 provision.go:83] configureAuth start
	I1227 00:09:14.139989    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:16.365417    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:16.365483    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:16.365483    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:14.831817    9096 api_server.go:269] stopped: https://172.21.179.115:8443/healthz: Get "https://172.21.179.115:8443/healthz": dial tcp 172.21.179.115:8443: connectex: No connection could be made because the target machine actively refused it.
	I1227 00:09:14.831817    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.558135    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 00:09:18.558135    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:09:18.558135    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.601624    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 00:09:18.602050    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 00:09:18.940244    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:18.951278    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:18.951508    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:19.450382    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:19.459806    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:19.460083    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:19.940696    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:19.964611    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1227 00:09:19.964611    9096 api_server.go:103] status: https://172.21.179.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1227 00:09:20.448905    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:20.456907    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 200:
	ok
	I1227 00:09:20.472245    9096 api_server.go:141] control plane version: v1.28.4
	I1227 00:09:20.472351    9096 api_server.go:131] duration metric: took 25.5372053s to wait for apiserver health ...
	I1227 00:09:20.472405    9096 cni.go:84] Creating CNI manager for ""
	I1227 00:09:20.472405    9096 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 00:09:20.475383    9096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1227 00:09:19.038167    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:19.038167    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:19.038167    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:21.271802    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:21.272078    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:21.272190    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:20.490174    9096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1227 00:09:20.507349    9096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1227 00:09:20.534579    9096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 00:09:20.548022    9096 system_pods.go:59] 6 kube-system pods found
	I1227 00:09:20.548022    9096 system_pods.go:61] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 00:09:20.548022    9096 system_pods.go:61] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:20.548022    9096 system_pods.go:61] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 00:09:20.548022    9096 system_pods.go:74] duration metric: took 13.4432ms to wait for pod list to return data ...
	I1227 00:09:20.548022    9096 node_conditions.go:102] verifying NodePressure condition ...
	I1227 00:09:20.554098    9096 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1227 00:09:20.554098    9096 node_conditions.go:123] node cpu capacity is 2
	I1227 00:09:20.554098    9096 node_conditions.go:105] duration metric: took 6.0764ms to run NodePressure ...
	I1227 00:09:20.554098    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1227 00:09:20.934651    9096 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1227 00:09:20.946519    9096 kubeadm.go:787] kubelet initialised
	I1227 00:09:20.946519    9096 kubeadm.go:788] duration metric: took 11.8026ms waiting for restarted kubelet to initialise ...
	I1227 00:09:20.946694    9096 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:20.955786    9096 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:21.473959    9096 pod_ready.go:92] pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:21.473959    9096 pod_ready.go:81] duration metric: took 518.1186ms waiting for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:21.473959    9096 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:23.496968    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:23.848172    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:23.848172    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:23.848372    8152 provision.go:138] copyHostCerts
	I1227 00:09:23.848953    8152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1227 00:09:23.849061    8152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1227 00:09:23.849738    8152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I1227 00:09:23.851649    8152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1227 00:09:23.851746    8152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1227 00:09:23.852057    8152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1227 00:09:23.853506    8152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1227 00:09:23.853617    8152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1227 00:09:23.853977    8152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1227 00:09:23.855350    8152 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-344500 san=[172.21.177.64 172.21.177.64 localhost 127.0.0.1 minikube auto-344500]
	I1227 00:09:23.939923    8152 provision.go:172] copyRemoteCerts
	I1227 00:09:23.956045    8152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 00:09:23.956207    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:26.122882    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:26.122960    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:26.123053    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:25.991427    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:28.000016    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:28.775042    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:28.775042    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:28.775346    8152 sshutil.go:53] new ssh client: &{IP:172.21.177.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\auto-344500\id_rsa Username:docker}
	I1227 00:09:28.884647    8152 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.928456s)
	I1227 00:09:28.885486    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1227 00:09:28.925576    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 00:09:28.968538    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 00:09:29.009880    8152 provision.go:86] duration metric: configureAuth took 14.869946s
	I1227 00:09:29.009946    8152 buildroot.go:189] setting minikube options for container-runtime
	I1227 00:09:29.010132    8152 config.go:182] Loaded profile config "auto-344500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:09:29.010132    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:31.186466    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:31.186466    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:31.186537    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:30.498627    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:32.993816    9096 pod_ready.go:102] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"False"
	I1227 00:09:34.025638    9096 pod_ready.go:92] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.025638    9096 pod_ready.go:81] duration metric: took 12.5516836s waiting for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.025638    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.056983    9096 pod_ready.go:92] pod "kube-apiserver-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.056983    9096 pod_ready.go:81] duration metric: took 31.3452ms waiting for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.056983    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.075800    9096 pod_ready.go:92] pod "kube-controller-manager-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.075800    9096 pod_ready.go:81] duration metric: took 18.8162ms waiting for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.075800    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.100634    9096 pod_ready.go:92] pod "kube-proxy-7qklg" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.100634    9096 pod_ready.go:81] duration metric: took 24.8343ms waiting for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.100634    9096 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.108368    9096 pod_ready.go:92] pod "kube-scheduler-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.108368    9096 pod_ready.go:81] duration metric: took 7.734ms waiting for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.108368    9096 pod_ready.go:38] duration metric: took 13.1616782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:34.108368    9096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 00:09:34.129956    9096 ops.go:34] apiserver oom_adj: -16
	I1227 00:09:34.130036    9096 kubeadm.go:640] restartCluster took 1m6.5450256s
	I1227 00:09:34.130081    9096 kubeadm.go:406] StartCluster complete in 1m6.615372s
	I1227 00:09:34.130146    9096 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:09:34.130346    9096 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1227 00:09:34.131613    9096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 00:09:34.132872    9096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 00:09:34.132872    9096 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1227 00:09:34.136858    9096 out.go:177] * Enabled addons: 
	I1227 00:09:34.133793    9096 config.go:182] Loaded profile config "pause-178300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1227 00:09:34.139248    9096 addons.go:508] enable addons completed in 6.3757ms: enabled=[]
	I1227 00:09:34.148140    9096 kapi.go:59] client config for pause-178300: &rest.Config{Host:"https://172.21.179.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-178300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2052b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 00:09:34.153743    9096 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-178300" context rescaled to 1 replicas
	I1227 00:09:34.153871    9096 start.go:223] Will wait 6m0s for node &{Name: IP:172.21.179.115 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 00:09:34.156801    9096 out.go:177] * Verifying Kubernetes components...
	I1227 00:09:34.171210    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 00:09:33.798391    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:33.798659    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:33.804708    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:33.805446    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:33.805611    8152 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 00:09:33.947210    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1227 00:09:33.947210    8152 buildroot.go:70] root file system type: tmpfs
	I1227 00:09:33.947210    8152 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 00:09:33.947210    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:36.138744    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:36.138843    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:36.138843    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:34.278688    9096 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1227 00:09:34.278744    9096 node_ready.go:35] waiting up to 6m0s for node "pause-178300" to be "Ready" ...
	I1227 00:09:34.283361    9096 node_ready.go:49] node "pause-178300" has status "Ready":"True"
	I1227 00:09:34.283361    9096 node_ready.go:38] duration metric: took 4.6162ms waiting for node "pause-178300" to be "Ready" ...
	I1227 00:09:34.283361    9096 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:34.405767    9096 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.795719    9096 pod_ready.go:92] pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:34.795719    9096 pod_ready.go:81] duration metric: took 389.9522ms waiting for pod "coredns-5dd5756b68-68vdw" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:34.795719    9096 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.194062    9096 pod_ready.go:92] pod "etcd-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:35.194126    9096 pod_ready.go:81] duration metric: took 398.4075ms waiting for pod "etcd-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.194126    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.590396    9096 pod_ready.go:92] pod "kube-apiserver-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:35.590396    9096 pod_ready.go:81] duration metric: took 396.1924ms waiting for pod "kube-apiserver-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:35.590498    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.001547    9096 pod_ready.go:92] pod "kube-controller-manager-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.001547    9096 pod_ready.go:81] duration metric: took 411.0488ms waiting for pod "kube-controller-manager-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.001547    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.396452    9096 pod_ready.go:92] pod "kube-proxy-7qklg" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.396452    9096 pod_ready.go:81] duration metric: took 394.9057ms waiting for pod "kube-proxy-7qklg" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.396452    9096 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.794869    9096 pod_ready.go:92] pod "kube-scheduler-pause-178300" in "kube-system" namespace has status "Ready":"True"
	I1227 00:09:36.794869    9096 pod_ready.go:81] duration metric: took 398.4165ms waiting for pod "kube-scheduler-pause-178300" in "kube-system" namespace to be "Ready" ...
	I1227 00:09:36.794940    9096 pod_ready.go:38] duration metric: took 2.5115805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1227 00:09:36.794940    9096 api_server.go:52] waiting for apiserver process to appear ...
	I1227 00:09:36.807933    9096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 00:09:36.829983    9096 api_server.go:72] duration metric: took 2.6761123s to wait for apiserver process to appear ...
	I1227 00:09:36.829983    9096 api_server.go:88] waiting for apiserver healthz status ...
	I1227 00:09:36.829983    9096 api_server.go:253] Checking apiserver healthz at https://172.21.179.115:8443/healthz ...
	I1227 00:09:36.840216    9096 api_server.go:279] https://172.21.179.115:8443/healthz returned 200:
	ok
	I1227 00:09:36.841949    9096 api_server.go:141] control plane version: v1.28.4
	I1227 00:09:36.842799    9096 api_server.go:131] duration metric: took 12.8162ms to wait for apiserver health ...
	I1227 00:09:36.842799    9096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 00:09:37.005545    9096 system_pods.go:59] 6 kube-system pods found
	I1227 00:09:37.005658    9096 system_pods.go:61] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:37.005658    9096 system_pods.go:61] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running
	I1227 00:09:37.005658    9096 system_pods.go:74] duration metric: took 162.8589ms to wait for pod list to return data ...
	I1227 00:09:37.005658    9096 default_sa.go:34] waiting for default service account to be created ...
	I1227 00:09:37.191345    9096 default_sa.go:45] found service account: "default"
	I1227 00:09:37.191345    9096 default_sa.go:55] duration metric: took 185.5493ms for default service account to be created ...
	I1227 00:09:37.191473    9096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 00:09:37.404717    9096 system_pods.go:86] 6 kube-system pods found
	I1227 00:09:37.404717    9096 system_pods.go:89] "coredns-5dd5756b68-68vdw" [ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95] Running
	I1227 00:09:37.404717    9096 system_pods.go:89] "etcd-pause-178300" [e7cce9d1-ebf4-4040-96ad-75bd234d231e] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-apiserver-pause-178300" [2ff43027-e352-4053-9aa6-ec12574be43d] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-controller-manager-pause-178300" [07ac7fdb-30d0-4a99-8a5a-af07da97d915] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-proxy-7qklg" [e29bd3e6-d025-4c44-abb4-5f07e243d1d8] Running
	I1227 00:09:37.404796    9096 system_pods.go:89] "kube-scheduler-pause-178300" [0247a377-a670-4bd4-a997-0a86af2466b7] Running
	I1227 00:09:37.404796    9096 system_pods.go:126] duration metric: took 213.3231ms to wait for k8s-apps to be running ...
	I1227 00:09:37.404856    9096 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 00:09:37.417809    9096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 00:09:37.443598    9096 system_svc.go:56] duration metric: took 38.3087ms WaitForService to wait for kubelet.
	I1227 00:09:37.443598    9096 kubeadm.go:581] duration metric: took 3.2897272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1227 00:09:37.443598    9096 node_conditions.go:102] verifying NodePressure condition ...
	I1227 00:09:37.604240    9096 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1227 00:09:37.604356    9096 node_conditions.go:123] node cpu capacity is 2
	I1227 00:09:37.604356    9096 node_conditions.go:105] duration metric: took 160.7584ms to run NodePressure ...
	I1227 00:09:37.604356    9096 start.go:228] waiting for startup goroutines ...
	I1227 00:09:37.604356    9096 start.go:233] waiting for cluster config update ...
	I1227 00:09:37.604356    9096 start.go:242] writing updated cluster config ...
	I1227 00:09:37.619816    9096 ssh_runner.go:195] Run: rm -f paused
	I1227 00:09:37.784765    9096 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1227 00:09:37.789081    9096 out.go:177] * Done! kubectl is now configured to use "pause-178300" cluster and "default" namespace by default
	I1227 00:09:38.848756    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:38.848756    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:38.853521    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:38.854812    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:38.854948    8152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 00:09:39.005637    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 00:09:39.005738    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:41.217667    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:41.217894    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:41.218178    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:43.853884    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:43.853884    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:43.858608    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:43.859237    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:43.859237    8152 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 00:09:45.037067    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1227 00:09:45.037128    8152 machine.go:91] provisioned docker machine in 40.7719258s
	I1227 00:09:45.037190    8152 client.go:171] LocalClient.Create took 1m55.9465223s
	I1227 00:09:45.037190    8152 start.go:167] duration metric: libmachine.API.Create for "auto-344500" took 1m55.9472234s
	I1227 00:09:45.037342    8152 start.go:300] post-start starting for "auto-344500" (driver="hyperv")
	I1227 00:09:45.037401    8152 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 00:09:45.051345    8152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 00:09:45.051708    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:47.279535    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:47.279535    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:47.279694    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:49.956051    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:49.956158    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:49.956358    8152 sshutil.go:53] new ssh client: &{IP:172.21.177.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\auto-344500\id_rsa Username:docker}
	I1227 00:09:50.070063    8152 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0186242s)
	I1227 00:09:50.087808    8152 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 00:09:50.096614    8152 info.go:137] Remote host: Buildroot 2021.02.12
	I1227 00:09:50.096614    8152 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1227 00:09:50.096614    8152 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1227 00:09:50.098287    8152 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem -> 107282.pem in /etc/ssl/certs
	I1227 00:09:50.113368    8152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 00:09:50.130462    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\107282.pem --> /etc/ssl/certs/107282.pem (1708 bytes)
	I1227 00:09:50.173023    8152 start.go:303] post-start completed in 5.1356236s
	I1227 00:09:50.176380    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:52.392067    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:52.392152    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:52.392328    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:55.039464    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:55.039534    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:55.039534    8152 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-344500\config.json ...
	I1227 00:09:55.043017    8152 start.go:128] duration metric: createHost completed in 2m5.9591489s
	I1227 00:09:55.043156    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:09:57.235199    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:09:57.235269    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:57.235269    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:09:59.983405    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:09:59.983405    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:09:59.988400    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:09:59.989416    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:09:59.989416    8152 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1227 00:10:00.130457    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703635800.122161209
	
	I1227 00:10:00.130457    8152 fix.go:206] guest clock: 1703635800.122161209
	I1227 00:10:00.130457    8152 fix.go:219] Guest: 2023-12-27 00:10:00.122161209 +0000 UTC Remote: 2023-12-27 00:09:55.0430179 +0000 UTC m=+323.823310401 (delta=5.079143309s)
	I1227 00:10:00.130457    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:10:05.127070    2120 start.go:369] acquired machines lock for "kindnet-344500" in 2m13.4450405s
	I1227 00:10:05.127070    2120 start.go:93] Provisioning new machine with config: &{Name:kindnet-344500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-344500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 00:10:05.127070    2120 start.go:125] createHost starting for "" (driver="hyperv")
	I1227 00:10:05.132416    2120 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1227 00:10:05.132970    2120 start.go:159] libmachine.API.Create for "kindnet-344500" (driver="hyperv")
	I1227 00:10:05.133049    2120 client.go:168] LocalClient.Create starting
	I1227 00:10:05.133833    2120 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1227 00:10:05.134087    2120 main.go:141] libmachine: Decoding PEM data...
	I1227 00:10:05.134158    2120 main.go:141] libmachine: Parsing certificate...
	I1227 00:10:05.134466    2120 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1227 00:10:05.134742    2120 main.go:141] libmachine: Decoding PEM data...
	I1227 00:10:05.134856    2120 main.go:141] libmachine: Parsing certificate...
	I1227 00:10:05.134993    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1227 00:10:02.352239    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:10:02.352239    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:02.352396    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:10:04.968233    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:10:04.968233    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:04.975057    8152 main.go:141] libmachine: Using SSH client type: native
	I1227 00:10:04.975848    8152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc65420] 0xc67f60 <nil>  [] 0s} 172.21.177.64 22 <nil> <nil>}
	I1227 00:10:04.975848    8152 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1703635800
	I1227 00:10:05.126476    8152 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Dec 27 00:10:00 UTC 2023
	
	I1227 00:10:05.126476    8152 fix.go:226] clock set: Wed Dec 27 00:10:00 UTC 2023
	 (err=<nil>)
	I1227 00:10:05.126476    8152 start.go:83] releasing machines lock for "auto-344500", held for 2m16.0431585s
	I1227 00:10:05.127015    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:10:07.209675    2120 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1227 00:10:07.209675    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:07.209814    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1227 00:10:09.075436    2120 main.go:141] libmachine: [stdout =====>] : False
	
	I1227 00:10:09.075560    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:09.075603    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1227 00:10:07.399681    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:10:07.399875    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:07.399875    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:10:10.142824    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:10:10.142910    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:10.148107    8152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 00:10:10.148107    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:10:10.162725    8152 ssh_runner.go:195] Run: cat /version.json
	I1227 00:10:10.163717    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-344500 ).state
	I1227 00:10:10.744808    2120 main.go:141] libmachine: [stdout =====>] : True
	
	I1227 00:10:10.744942    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:10.745540    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1227 00:10:14.842525    2120 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1227 00:10:14.842695    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:14.844824    2120 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1227 00:10:15.305291    2120 main.go:141] libmachine: Creating SSH key...
	I1227 00:10:15.418908    2120 main.go:141] libmachine: Creating VM...
	I1227 00:10:15.419906    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1227 00:10:12.552000    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:10:12.552162    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:12.552076    8152 main.go:141] libmachine: [stdout =====>] : Running
	
	I1227 00:10:12.552162    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:10:12.552162    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:12.552162    8152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-344500 ).networkadapters[0]).ipaddresses[0]
	I1227 00:10:15.519251    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:10:15.519558    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:15.519871    8152 sshutil.go:53] new ssh client: &{IP:172.21.177.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\auto-344500\id_rsa Username:docker}
	I1227 00:10:15.556714    8152 main.go:141] libmachine: [stdout =====>] : 172.21.177.64
	
	I1227 00:10:15.557713    8152 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:15.558046    8152 sshutil.go:53] new ssh client: &{IP:172.21.177.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\auto-344500\id_rsa Username:docker}
	I1227 00:10:15.623134    8152 ssh_runner.go:235] Completed: cat /version.json: (5.4604114s)
	I1227 00:10:15.636601    8152 ssh_runner.go:195] Run: systemctl --version
	I1227 00:10:15.716738    8152 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.568534s)
	I1227 00:10:15.733256    8152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 00:10:15.741898    8152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 00:10:15.758817    8152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 00:10:15.785600    8152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 00:10:15.785689    8152 start.go:475] detecting cgroup driver to use...
	I1227 00:10:15.786035    8152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 00:10:15.835092    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1227 00:10:15.871676    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 00:10:15.893683    8152 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 00:10:15.905678    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 00:10:15.948676    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 00:10:15.996693    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 00:10:16.032496    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 00:10:16.066183    8152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 00:10:16.124641    8152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 00:10:16.157621    8152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 00:10:16.189745    8152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 00:10:16.234078    8152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:10:18.780865    2120 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1227 00:10:18.780956    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:18.781075    2120 main.go:141] libmachine: Using switch "Default Switch"
	I1227 00:10:18.781163    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1227 00:10:16.430353    8152 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 00:10:16.596811    8152 start.go:475] detecting cgroup driver to use...
	I1227 00:10:16.613224    8152 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 00:10:16.660062    8152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 00:10:16.710319    8152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 00:10:16.758887    8152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 00:10:16.793970    8152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 00:10:16.828264    8152 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 00:10:16.899232    8152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 00:10:16.920246    8152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 00:10:16.967212    8152 ssh_runner.go:195] Run: which cri-dockerd
	I1227 00:10:16.997013    8152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 00:10:17.014016    8152 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1227 00:10:17.060933    8152 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 00:10:17.248749    8152 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 00:10:17.416264    8152 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1227 00:10:17.416301    8152 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1227 00:10:17.459919    8152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:10:17.662635    8152 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 00:10:19.373090    8152 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7104549s)
	I1227 00:10:19.385092    8152 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 00:10:19.577939    8152 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 00:10:19.770415    8152 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 00:10:19.967671    8152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:10:20.175474    8152 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 00:10:20.228933    8152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 00:10:20.415555    8152 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1227 00:10:20.529612    8152 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 00:10:20.543019    8152 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 00:10:20.552024    8152 start.go:543] Will wait 60s for crictl version
	I1227 00:10:20.567817    8152 ssh_runner.go:195] Run: which crictl
	I1227 00:10:20.589604    8152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1227 00:10:20.677870    8152 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1227 00:10:20.691195    8152 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 00:10:20.742800    8152 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 00:10:20.790675    8152 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1227 00:10:20.790793    8152 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1227 00:10:20.802957    8152 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1227 00:10:20.802957    8152 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1227 00:10:20.803060    8152 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1227 00:10:20.803060    8152 ip.go:207] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4e:ec:d4 Flags:up|broadcast|multicast|running}
	I1227 00:10:20.808042    8152 ip.go:210] interface addr: fe80::1f69:6bdb:2000:8fcd/64
	I1227 00:10:20.808042    8152 ip.go:210] interface addr: 172.21.176.1/20
	I1227 00:10:20.823515    8152 ssh_runner.go:195] Run: grep 172.21.176.1	host.minikube.internal$ /etc/hosts
	I1227 00:10:20.830377    8152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.21.176.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 00:10:20.852605    8152 localpath.go:92] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.crt -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-344500\client.crt
	I1227 00:10:20.855125    8152 localpath.go:117] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.key -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-344500\client.key
	I1227 00:10:20.857958    8152 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1227 00:10:20.875478    8152 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 00:10:20.906778    8152 docker.go:671] Got preloaded images: 
	I1227 00:10:20.906778    8152 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1227 00:10:20.918493    8152 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1227 00:10:20.949583    8152 ssh_runner.go:195] Run: which lz4
	I1227 00:10:20.969438    8152 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1227 00:10:20.976160    8152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1227 00:10:20.976160    8152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1227 00:10:20.711827    2120 main.go:141] libmachine: [stdout =====>] : True
	
	I1227 00:10:20.712029    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:20.712029    2120 main.go:141] libmachine: Creating VHD
	I1227 00:10:20.712157    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kindnet-344500\fixed.vhd' -SizeBytes 10MB -Fixed
	I1227 00:10:24.862855    2120 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kindnet-344500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 836DA198-FA38-47EE-A204-D3E37A398ED6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1227 00:10:24.862971    2120 main.go:141] libmachine: [stderr =====>] : 
	I1227 00:10:24.862971    2120 main.go:141] libmachine: Writing magic tar header
	I1227 00:10:24.863150    2120 main.go:141] libmachine: Writing SSH key tar header
	I1227 00:10:24.871959    2120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kindnet-344500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kindnet-344500\disk.vhd' -VHDType Dynamic -DeleteSource
	I1227 00:10:23.464642    8152 docker.go:635] Took 2.508191 seconds to copy over tarball
	I1227 00:10:23.478905    8152 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> Docker <==
	-- Journal begins at Wed 2023-12-27 00:00:28 UTC, ends at Wed 2023-12-27 00:10:36 UTC. --
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262132511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262400111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262433811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.262451011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322352309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322590609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322683309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.322765109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.352445758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.352564458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.353005557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.353040357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.925367688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.925876487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.926173087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:14 pause-178300 dockerd[7370]: time="2023-12-27T00:09:14.926929785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:18 pause-178300 cri-dockerd[7640]: time="2023-12-27T00:09:18Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402379253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402524053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402545853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.402559853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.413350935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.413656535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.413910734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 27 00:09:19 pause-178300 dockerd[7370]: time="2023-12-27T00:09:19.414099634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f5b7bbc89d6ae       83f6cc407eed8       About a minute ago   Running             kube-proxy                2                   7a23a955bf12a       kube-proxy-7qklg
	57f089f59d89d       ead0a4a53df89       About a minute ago   Running             coredns                   2                   99a37e2a3c12b       coredns-5dd5756b68-68vdw
	98ec605b2087a       7fe0e6f37db33       About a minute ago   Running             kube-apiserver            3                   78cfdb6cbb5f8       kube-apiserver-pause-178300
	15fd97aaef6c3       e3db313c6dbc0       About a minute ago   Running             kube-scheduler            2                   9c77f16148701       kube-scheduler-pause-178300
	fbb65e0aaa078       d058aa5ab969c       About a minute ago   Running             kube-controller-manager   2                   4f1b8d0295f64       kube-controller-manager-pause-178300
	3e3ed1fc66542       73deb9a3f7025       About a minute ago   Running             etcd                      2                   c172fb604a8a3       etcd-pause-178300
	9cfb1e684b435       7fe0e6f37db33       About a minute ago   Exited              kube-apiserver            2                   78cfdb6cbb5f8       kube-apiserver-pause-178300
	a9088dec4fb61       83f6cc407eed8       2 minutes ago        Exited              kube-proxy                1                   a25058c31a5dc       kube-proxy-7qklg
	8b6ae4916189a       ead0a4a53df89       2 minutes ago        Exited              coredns                   1                   60e1fde3ed4de       coredns-5dd5756b68-68vdw
	8171cfbf5ab26       e3db313c6dbc0       2 minutes ago        Exited              kube-scheduler            1                   154bc9ebb206a       kube-scheduler-pause-178300
	0a5a40aa5e6ce       73deb9a3f7025       2 minutes ago        Exited              etcd                      1                   95cda17fa5332       etcd-pause-178300
	d66a1426a5dba       d058aa5ab969c       2 minutes ago        Exited              kube-controller-manager   1                   586dd0e9555c7       kube-controller-manager-pause-178300
	
	
	==> coredns [57f089f59d89] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 727859d02e49f226305255353d6ea73d4e25f577656e92efc00f8bdfe7b9e0a41c48e607fb0e54b875432612a89a9ff227ec88b4a4c86d52ce98698e96c5359a
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52475 - 64947 "HINFO IN 2163937736029644199.1187910723709867958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05508431s
	
	
	==> coredns [8b6ae4916189] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 727859d02e49f226305255353d6ea73d4e25f577656e92efc00f8bdfe7b9e0a41c48e607fb0e54b875432612a89a9ff227ec88b4a4c86d52ce98698e96c5359a
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60953 - 40417 "HINFO IN 4534999005635974088.6528223528083456404. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056562601s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-178300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-178300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=pause-178300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_27T00_02_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Dec 2023 00:02:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-178300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Dec 2023 00:10:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Dec 2023 00:09:18 +0000   Wed, 27 Dec 2023 00:02:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.21.179.115
	  Hostname:    pause-178300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 007ddc59221046b5828896e1dada895d
	  System UUID:                fc88b303-7776-3d46-9824-c9f0cd224106
	  Boot ID:                    f7dc9f5a-452c-4536-93d1-5c26dc3a92c6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-68vdw                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m54s
	  kube-system                 etcd-pause-178300                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         8m5s
	  kube-system                 kube-apiserver-pause-178300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-controller-manager-pause-178300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-proxy-7qklg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-scheduler-pause-178300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m50s                  kube-proxy       
	  Normal  Starting                 77s                    kube-proxy       
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m17s (x8 over 8m17s)  kubelet          Node pause-178300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s (x8 over 8m17s)  kubelet          Node pause-178300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s (x7 over 8m17s)  kubelet          Node pause-178300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m6s                   kubelet          Node pause-178300 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m6s                   kubelet          Node pause-178300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m6s                   kubelet          Node pause-178300 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m1s                   kubelet          Node pause-178300 status is now: NodeReady
	  Normal  RegisteredNode           7m55s                  node-controller  Node pause-178300 event: Registered Node pause-178300 in Controller
	  Normal  Starting                 103s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)    kubelet          Node pause-178300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)    kubelet          Node pause-178300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)    kubelet          Node pause-178300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                    node-controller  Node pause-178300 event: Registered Node pause-178300 in Controller
	
	
	==> dmesg <==
	[  +0.176100] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.201900] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +1.386701] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.421274] systemd-fstab-generator[1152]: Ignoring "noauto" for root device
	[  +0.196228] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.188844] systemd-fstab-generator[1174]: Ignoring "noauto" for root device
	[  +0.193436] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +0.239791] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[Dec27 00:02] systemd-fstab-generator[1306]: Ignoring "noauto" for root device
	[  +2.294244] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.638530] systemd-fstab-generator[1688]: Ignoring "noauto" for root device
	[  +0.971815] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.067719] systemd-fstab-generator[2712]: Ignoring "noauto" for root device
	[Dec27 00:08] systemd-fstab-generator[6914]: Ignoring "noauto" for root device
	[  +0.630962] systemd-fstab-generator[6953]: Ignoring "noauto" for root device
	[  +0.283846] systemd-fstab-generator[6964]: Ignoring "noauto" for root device
	[  +0.318195] systemd-fstab-generator[6986]: Ignoring "noauto" for root device
	[  +0.325178] kauditd_printk_skb: 23 callbacks suppressed
	[ +11.923220] systemd-fstab-generator[7522]: Ignoring "noauto" for root device
	[  +0.223646] systemd-fstab-generator[7533]: Ignoring "noauto" for root device
	[  +0.200911] systemd-fstab-generator[7544]: Ignoring "noauto" for root device
	[  +0.190045] systemd-fstab-generator[7555]: Ignoring "noauto" for root device
	[  +0.263826] systemd-fstab-generator[7575]: Ignoring "noauto" for root device
	[  +8.042695] kauditd_printk_skb: 29 callbacks suppressed
	[ +20.946618] systemd-fstab-generator[9569]: Ignoring "noauto" for root device
	
	
	==> etcd [0a5a40aa5e6c] <==
	{"level":"warn","ts":"2023-12-27T00:08:47.051579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.349828ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1116202938423774998 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-pause-178300.17a487a4173b4f4f\" mod_revision:530 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-pause-178300.17a487a4173b4f4f\" value_size:747 lease:1116202938423774855 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-pause-178300.17a487a4173b4f4f\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-27T00:08:47.051696Z","caller":"traceutil/trace.go:171","msg":"trace[16200084] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"275.766636ms","start":"2023-12-27T00:08:46.775904Z","end":"2023-12-27T00:08:47.051671Z","steps":["trace[16200084] 'read index received'  (duration: 140.18461ms)","trace[16200084] 'applied index is now lower than readState.Index'  (duration: 135.580926ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-27T00:08:47.051764Z","caller":"traceutil/trace.go:171","msg":"trace[184720957] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"277.231432ms","start":"2023-12-27T00:08:46.774524Z","end":"2023-12-27T00:08:47.051755Z","steps":["trace[184720957] 'process raft request'  (duration: 141.633105ms)","trace[184720957] 'compare'  (duration: 134.58153ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-27T00:08:47.052065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.290235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:public-info-viewer\" ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2023-12-27T00:08:47.052122Z","caller":"traceutil/trace.go:171","msg":"trace[1921880010] range","detail":"{range_begin:/registry/clusterrolebindings/system:public-info-viewer; range_end:; response_count:1; response_revision:549; }","duration":"276.349535ms","start":"2023-12-27T00:08:46.775765Z","end":"2023-12-27T00:08:47.052114Z","steps":["trace[1921880010] 'agreement among raft nodes before linearized reading'  (duration: 276.231535ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:08:47.052233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.826543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-27T00:08:47.05232Z","caller":"traceutil/trace.go:171","msg":"trace[393002975] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:549; }","duration":"165.913742ms","start":"2023-12-27T00:08:46.886399Z","end":"2023-12-27T00:08:47.052313Z","steps":["trace[393002975] 'agreement among raft nodes before linearized reading'  (duration: 165.813643ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-27T00:08:47.444594Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-27T00:08:47.444658Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-178300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.21.179.115:2380"],"advertise-client-urls":["https://172.21.179.115:2379"]}
	{"level":"warn","ts":"2023-12-27T00:08:47.444785Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-27T00:08:47.444981Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	WARNING: 2023/12/27 00:08:47 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-12-27T00:08:47.445508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.462861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:kube-dns\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2023-12-27T00:08:47.445547Z","caller":"traceutil/trace.go:171","msg":"trace[23265992] range","detail":"{range_begin:/registry/clusterrolebindings/system:kube-dns; range_end:; }","duration":"384.582361ms","start":"2023-12-27T00:08:47.060953Z","end":"2023-12-27T00:08:47.445535Z","steps":["trace[23265992] 'agreement among raft nodes before linearized reading'  (duration: 384.461561ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:08:47.445574Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:08:47.060945Z","time spent":"384.618161ms","remote":"127.0.0.1:48306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/system:kube-dns\" "}
	WARNING: 2023/12/27 00:08:47 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-12-27T00:08:47.445776Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:08:47.059879Z","time spent":"385.889957ms","remote":"127.0.0.1:48242","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	WARNING: 2023/12/27 00:08:47 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-12-27T00:08:47.467311Z","caller":"traceutil/trace.go:171","msg":"trace[2032799466] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"406.272902ms","start":"2023-12-27T00:08:47.061021Z","end":"2023-12-27T00:08:47.467294Z","steps":["trace[2032799466] 'read index received'  (duration: 309.285064ms)","trace[2032799466] 'applied index is now lower than readState.Index'  (duration: 96.986538ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-27T00:08:47.514604Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.21.179.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-27T00:08:47.514663Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.21.179.115:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-27T00:08:47.51473Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"117d4d98b7f20f7d","current-leader-member-id":"117d4d98b7f20f7d"}
	{"level":"info","ts":"2023-12-27T00:08:48.117283Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.21.179.115:2380"}
	{"level":"info","ts":"2023-12-27T00:08:48.117566Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.21.179.115:2380"}
	{"level":"info","ts":"2023-12-27T00:08:48.117597Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-178300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.21.179.115:2380"],"advertise-client-urls":["https://172.21.179.115:2379"]}
	
	
	==> etcd [3e3ed1fc6654] <==
	{"level":"info","ts":"2023-12-27T00:09:15.785795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-27T00:09:15.805187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-27T00:09:15.806242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.21.179.115:2379"}
	{"level":"info","ts":"2023-12-27T00:09:15.816698Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-27T00:09:15.816767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-27T00:10:30.153852Z","caller":"traceutil/trace.go:171","msg":"trace[1134498715] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"131.653705ms","start":"2023-12-27T00:10:30.022177Z","end":"2023-12-27T00:10:30.15383Z","steps":["trace[1134498715] 'process raft request'  (duration: 131.131704ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:10:30.460729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.965294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1116202938434187472 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0f7d8ca89a9df0cf>","response":"size:40"}
	{"level":"info","ts":"2023-12-27T00:10:30.460856Z","caller":"traceutil/trace.go:171","msg":"trace[244007600] linearizableReadLoop","detail":"{readStateIndex:754; appliedIndex:753; }","duration":"128.904197ms","start":"2023-12-27T00:10:30.33194Z","end":"2023-12-27T00:10:30.460844Z","steps":["trace[244007600] 'read index received'  (duration: 132.5µs)","trace[244007600] 'applied index is now lower than readState.Index'  (duration: 128.769997ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-27T00:10:30.461058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.075897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-27T00:10:30.46111Z","caller":"traceutil/trace.go:171","msg":"trace[1354997969] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:641; }","duration":"129.216698ms","start":"2023-12-27T00:10:30.331883Z","end":"2023-12-27T00:10:30.4611Z","steps":["trace[1354997969] 'agreement among raft nodes before linearized reading'  (duration: 129.030497ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:10:30.461395Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:10:30.15522Z","time spent":"306.171343ms","remote":"127.0.0.1:48942","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-12-27T00:10:31.063563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.392057ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1116202938434187474 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.21.179.115\" mod_revision:639 > success:<request_put:<key:\"/registry/masterleases/172.21.179.115\" value_size:67 lease:1116202938434187471 >> failure:<request_range:<key:\"/registry/masterleases/172.21.179.115\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-27T00:10:31.06379Z","caller":"traceutil/trace.go:171","msg":"trace[352950399] linearizableReadLoop","detail":"{readStateIndex:756; appliedIndex:754; }","duration":"258.514191ms","start":"2023-12-27T00:10:30.805223Z","end":"2023-12-27T00:10:31.063738Z","steps":["trace[352950399] 'read index received'  (duration: 10.007531ms)","trace[352950399] 'applied index is now lower than readState.Index'  (duration: 248.50606ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-27T00:10:31.063871Z","caller":"traceutil/trace.go:171","msg":"trace[641352185] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"600.933947ms","start":"2023-12-27T00:10:30.462929Z","end":"2023-12-27T00:10:31.063863Z","steps":["trace[641352185] 'process raft request'  (duration: 352.291886ms)","trace[641352185] 'compare'  (duration: 246.724255ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-27T00:10:31.063918Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:10:30.462912Z","time spent":"600.978347ms","remote":"127.0.0.1:48942","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.21.179.115\" mod_revision:639 > success:<request_put:<key:\"/registry/masterleases/172.21.179.115\" value_size:67 lease:1116202938434187471 >> failure:<request_range:<key:\"/registry/masterleases/172.21.179.115\" > >"}
	{"level":"info","ts":"2023-12-27T00:10:31.064563Z","caller":"traceutil/trace.go:171","msg":"trace[240372751] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"465.421629ms","start":"2023-12-27T00:10:30.599132Z","end":"2023-12-27T00:10:31.064553Z","steps":["trace[240372751] 'process raft request'  (duration: 464.525226ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:10:31.064613Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-27T00:10:30.599108Z","time spent":"465.477229ms","remote":"127.0.0.1:49000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3z7v6hoj6ykz5nx674fx6ypx3q\" mod_revision:640 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3z7v6hoj6ykz5nx674fx6ypx3q\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3z7v6hoj6ykz5nx674fx6ypx3q\" > >"}
	{"level":"warn","ts":"2023-12-27T00:10:31.064733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.551194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-27T00:10:31.064758Z","caller":"traceutil/trace.go:171","msg":"trace[905290283] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:643; }","duration":"259.577995ms","start":"2023-12-27T00:10:30.805173Z","end":"2023-12-27T00:10:31.064751Z","steps":["trace[905290283] 'agreement among raft nodes before linearized reading'  (duration: 259.533194ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:10:31.064872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.288509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-27T00:10:31.0649Z","caller":"traceutil/trace.go:171","msg":"trace[832273806] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:643; }","duration":"134.315509ms","start":"2023-12-27T00:10:30.930576Z","end":"2023-12-27T00:10:31.064891Z","steps":["trace[832273806] 'agreement among raft nodes before linearized reading'  (duration: 134.274009ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:10:31.065559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.509014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-27T00:10:31.065593Z","caller":"traceutil/trace.go:171","msg":"trace[597470310] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:643; }","duration":"103.544414ms","start":"2023-12-27T00:10:30.962041Z","end":"2023-12-27T00:10:31.065585Z","steps":["trace[597470310] 'agreement among raft nodes before linearized reading'  (duration: 103.446614ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-27T00:10:35.564717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.060219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-27T00:10:35.564826Z","caller":"traceutil/trace.go:171","msg":"trace[1764892199] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:643; }","duration":"299.174419ms","start":"2023-12-27T00:10:35.265623Z","end":"2023-12-27T00:10:35.564797Z","steps":["trace[1764892199] 'count revisions from in-memory index tree'  (duration: 298.781317ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:10:37 up 10 min,  0 users,  load average: 0.81, 0.87, 0.44
	Linux pause-178300 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [98ec605b2087] <==
	I1227 00:09:18.704809       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 00:09:18.704825       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 00:09:18.706130       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 00:09:18.706786       1 aggregator.go:166] initial CRD sync complete...
	I1227 00:09:18.706826       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 00:09:18.706833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 00:09:18.706839       1 cache.go:39] Caches are synced for autoregister controller
	I1227 00:09:18.722874       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 00:09:18.723686       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 00:09:18.754812       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 00:09:18.762847       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 00:09:19.568412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1227 00:09:20.083643       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.21.179.115]
	I1227 00:09:20.086798       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 00:09:20.098452       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 00:09:20.733632       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 00:09:20.755078       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 00:09:20.835580       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 00:09:20.903976       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 00:09:20.917693       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 00:10:31.068072       1 trace.go:236] Trace[1340513472]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.21.179.115,type:*v1.Endpoints,resource:apiServerIPInfo (27-Dec-2023 00:10:30.067) (total time: 1000ms):
	Trace[1340513472]: ---"initial value restored" 86ms (00:10:30.154)
	Trace[1340513472]: ---"Transaction prepared" 308ms (00:10:30.462)
	Trace[1340513472]: ---"Txn call completed" 605ms (00:10:31.067)
	Trace[1340513472]: [1.000224576s] [1.000224576s] END
	
	
	==> kube-apiserver [9cfb1e684b43] <==
	I1227 00:08:52.079137       1 server.go:148] Version: v1.28.4
	I1227 00:08:52.079356       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1227 00:08:52.781941       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1227 00:08:52.783751       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W1227 00:08:52.784804       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1227 00:08:52.796789       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1227 00:08:52.796862       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1227 00:08:52.797157       1 instance.go:298] Using reconciler: lease
	W1227 00:08:52.799222       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:53.783339       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:53.786111       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:53.799881       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:55.362405       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:55.525728       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:55.555413       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:57.580971       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:57.607010       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:08:58.361504       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:00.946711       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:01.881909       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:01.907691       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:07.671612       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:07.730007       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 00:09:08.375862       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1227 00:09:12.799102       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [d66a1426a5db] <==
	I1227 00:08:34.466739       1 serving.go:348] Generated self-signed cert in-memory
	I1227 00:08:35.368504       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I1227 00:08:35.368555       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:08:35.370606       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 00:08:35.370883       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 00:08:35.373887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1227 00:08:35.374587       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [fbb65e0aaa07] <==
	I1227 00:09:31.054105       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1227 00:09:31.054203       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1227 00:09:31.054595       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 00:09:31.054925       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1227 00:09:31.058014       1 shared_informer.go:318] Caches are synced for GC
	I1227 00:09:31.063132       1 shared_informer.go:318] Caches are synced for service account
	I1227 00:09:31.063449       1 shared_informer.go:318] Caches are synced for taint
	I1227 00:09:31.064392       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1227 00:09:31.064745       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-178300"
	I1227 00:09:31.064933       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 00:09:31.065037       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1227 00:09:31.065183       1 taint_manager.go:210] "Sending events to api server"
	I1227 00:09:31.068331       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1227 00:09:31.068858       1 event.go:307] "Event occurred" object="pause-178300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-178300 event: Registered Node pause-178300 in Controller"
	I1227 00:09:31.084370       1 shared_informer.go:318] Caches are synced for HPA
	I1227 00:09:31.110301       1 shared_informer.go:318] Caches are synced for disruption
	I1227 00:09:31.112078       1 shared_informer.go:318] Caches are synced for stateful set
	I1227 00:09:31.122109       1 shared_informer.go:318] Caches are synced for endpoint
	I1227 00:09:31.152810       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 00:09:31.181165       1 shared_informer.go:318] Caches are synced for cronjob
	I1227 00:09:31.188662       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1227 00:09:31.197502       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 00:09:31.598438       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 00:09:31.598495       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 00:09:31.617678       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [a9088dec4fb6] <==
	I1227 00:08:35.447598       1 server_others.go:69] "Using iptables proxy"
	I1227 00:08:38.286089       1 node.go:141] Successfully retrieved node IP: 172.21.179.115
	I1227 00:08:38.360797       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1227 00:08:38.360907       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1227 00:08:38.365523       1 server_others.go:152] "Using iptables Proxier"
	I1227 00:08:38.365742       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 00:08:38.366213       1 server.go:846] "Version info" version="v1.28.4"
	I1227 00:08:38.366480       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:08:38.368906       1 config.go:188] "Starting service config controller"
	I1227 00:08:38.369141       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 00:08:38.369470       1 config.go:97] "Starting endpoint slice config controller"
	I1227 00:08:38.369655       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 00:08:38.373171       1 config.go:315] "Starting node config controller"
	I1227 00:08:38.373449       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 00:08:38.470238       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 00:08:38.470398       1 shared_informer.go:318] Caches are synced for service config
	I1227 00:08:38.473889       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f5b7bbc89d6a] <==
	I1227 00:09:19.648644       1 server_others.go:69] "Using iptables proxy"
	I1227 00:09:19.676503       1 node.go:141] Successfully retrieved node IP: 172.21.179.115
	I1227 00:09:19.732610       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1227 00:09:19.732654       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1227 00:09:19.739665       1 server_others.go:152] "Using iptables Proxier"
	I1227 00:09:19.741095       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 00:09:19.742050       1 server.go:846] "Version info" version="v1.28.4"
	I1227 00:09:19.742816       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:09:19.745207       1 config.go:188] "Starting service config controller"
	I1227 00:09:19.745595       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 00:09:19.746032       1 config.go:97] "Starting endpoint slice config controller"
	I1227 00:09:19.746440       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 00:09:19.747701       1 config.go:315] "Starting node config controller"
	I1227 00:09:19.747971       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 00:09:19.846341       1 shared_informer.go:318] Caches are synced for service config
	I1227 00:09:19.846743       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 00:09:19.849501       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [15fd97aaef6c] <==
	I1227 00:09:15.718162       1 serving.go:348] Generated self-signed cert in-memory
	W1227 00:09:18.637532       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 00:09:18.637729       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 00:09:18.637951       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 00:09:18.638226       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 00:09:18.726033       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1227 00:09:18.726176       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:09:18.729127       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 00:09:18.729352       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 00:09:18.731160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1227 00:09:18.731445       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 00:09:18.830388       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8171cfbf5ab2] <==
	I1227 00:08:35.068380       1 serving.go:348] Generated self-signed cert in-memory
	W1227 00:08:37.605034       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 00:08:37.605356       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 00:08:37.605572       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 00:08:37.605820       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 00:08:37.798139       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1227 00:08:37.798373       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 00:08:37.808097       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1227 00:08:37.808209       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 00:08:37.811373       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 00:08:37.808763       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 00:08:37.912882       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 00:08:47.507745       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1227 00:08:47.508038       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1227 00:08:47.508243       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1227 00:08:47.508674       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Wed 2023-12-27 00:00:28 UTC, ends at Wed 2023-12-27 00:10:37 UTC. --
	Dec 27 00:09:13 pause-178300 kubelet[9575]: I1227 00:09:13.966292    9575 scope.go:117] "RemoveContainer" containerID="d66a1426a5dbaa737c5a523391faadc567726fef632d95be44c9fb797bb7b09a"
	Dec 27 00:09:13 pause-178300 kubelet[9575]: I1227 00:09:13.982779    9575 scope.go:117] "RemoveContainer" containerID="8171cfbf5ab26d996daafe94ebbdd7bbf5ba920b356a5e7eb7060b1f0f1a6fbc"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: I1227 00:09:14.065776    9575 kubelet_node_status.go:70] "Attempting to register node" node="pause-178300"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: E1227 00:09:14.066598    9575 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.21.179.115:8443: connect: connection refused" node="pause-178300"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: E1227 00:09:14.211150    9575 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-178300?timeout=10s\": dial tcp 172.21.179.115:8443: connect: connection refused" interval="800ms"
	Dec 27 00:09:14 pause-178300 kubelet[9575]: I1227 00:09:14.663512    9575 scope.go:117] "RemoveContainer" containerID="9cfb1e684b435836d36064eb2e1d7241b91b4fd4867cb08229666fa581d95f58"
	Dec 27 00:09:15 pause-178300 kubelet[9575]: E1227 00:09:15.011921    9575 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-178300?timeout=10s\": dial tcp 172.21.179.115:8443: connect: connection refused" interval="1.6s"
	Dec 27 00:09:15 pause-178300 kubelet[9575]: E1227 00:09:15.229148    9575 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-178300\" not found"
	Dec 27 00:09:15 pause-178300 kubelet[9575]: I1227 00:09:15.684852    9575 kubelet_node_status.go:70] "Attempting to register node" node="pause-178300"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.801007    9575 kubelet_node_status.go:108] "Node was previously registered" node="pause-178300"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.801191    9575 kubelet_node_status.go:73] "Successfully registered node" node="pause-178300"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.804598    9575 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.806814    9575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.875829    9575 apiserver.go:52] "Watching apiserver"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.881000    9575 topology_manager.go:215] "Topology Admit Handler" podUID="e29bd3e6-d025-4c44-abb4-5f07e243d1d8" podNamespace="kube-system" podName="kube-proxy-7qklg"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.882111    9575 topology_manager.go:215] "Topology Admit Handler" podUID="ba95ef6b-0fe8-41bb-9e19-6ffc57a37b95" podNamespace="kube-system" podName="coredns-5dd5756b68-68vdw"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.929212    9575 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.963978    9575 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29bd3e6-d025-4c44-abb4-5f07e243d1d8-xtables-lock\") pod \"kube-proxy-7qklg\" (UID: \"e29bd3e6-d025-4c44-abb4-5f07e243d1d8\") " pod="kube-system/kube-proxy-7qklg"
	Dec 27 00:09:18 pause-178300 kubelet[9575]: I1227 00:09:18.964109    9575 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29bd3e6-d025-4c44-abb4-5f07e243d1d8-lib-modules\") pod \"kube-proxy-7qklg\" (UID: \"e29bd3e6-d025-4c44-abb4-5f07e243d1d8\") " pod="kube-system/kube-proxy-7qklg"
	Dec 27 00:09:19 pause-178300 kubelet[9575]: I1227 00:09:19.183297    9575 scope.go:117] "RemoveContainer" containerID="8b6ae4916189a2b9931b38b3bd8e9ba4f8334c7de222b1b6b16dabc009a02703"
	Dec 27 00:09:19 pause-178300 kubelet[9575]: I1227 00:09:19.183716    9575 scope.go:117] "RemoveContainer" containerID="a9088dec4fb61685e8eca1089cd87d8a8766cfc289d89c602cb6674289394b64"
	Dec 27 00:09:55 pause-178300 kubelet[9575]: E1227 00:09:55.053962    9575 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 27 00:09:55 pause-178300 kubelet[9575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 27 00:09:55 pause-178300 kubelet[9575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 27 00:09:55 pause-178300 kubelet[9575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1227 00:10:26.981589    7932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-178300 -n pause-178300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-178300 -n pause-178300: (12.8773329s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-178300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (482.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10806.196s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-674100 --alsologtostderr -v=3
E1227 00:46:01.369448   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-344500\client.crt: The system cannot find the path specified.
E1227 00:46:03.942588   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-344500\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (1h0m37s)
	TestNetworkPlugins/group (1h0m37s)
	TestStartStop (52m18s)
	TestStartStop/group (52m18s)
	TestStartStop/group/default-k8s-diff-port (2m41s)
	TestStartStop/group/default-k8s-diff-port/serial (2m41s)
	TestStartStop/group/default-k8s-diff-port/serial/FirstStart (2m41s)
	TestStartStop/group/embed-certs (5m18s)
	TestStartStop/group/embed-certs/serial (5m18s)
	TestStartStop/group/embed-certs/serial/Stop (4s)
	TestStartStop/group/no-preload (7m39s)
	TestStartStop/group/no-preload/serial (7m39s)
	TestStartStop/group/no-preload/serial/SecondStart (1m58s)
	TestStartStop/group/old-k8s-version (11m20s)
	TestStartStop/group/old-k8s-version/serial (11m20s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (35s)

                                                
                                                
goroutine 3563 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 40 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000605040, 0xc000863b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0007b3720?, {0x48a5cc0, 0x2a, 0x2a}, {0xc000863be8?, 0x6abfa5?, 0x48c7920?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0007b3720)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00009bef0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000069900)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 962 [chan send, 153 minutes]:
os/exec.(*Cmd).watchCtx(0xc002730000, 0xc00257b0e0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 945
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 3224 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc002191f50, 0xc00211e0b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x1?, 0x1?, 0xc002191fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002191fd0?, 0x77df87?, 0xc000a32a80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3216
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2447 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00075d090, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00255aae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00075d0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002d27f20?, {0x35a5740, 0xc0022fc210}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002d27fd0?, 0xb7bd45?, 0xc000a32780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2474
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 24 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 23
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 1495 [chan receive, 135 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002e2e700, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1493
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2474 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00075d0c0, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2472
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

                                                
                                                
goroutine 2239 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0021f3f50, 0x2701ca2?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x7?, 0x26f7594?, 0x2?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0xc0020f0ea0?, 0x739180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x73a045?, 0xc0020f0ea0?, 0xc000c8c120?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3453 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc002437ce0, 0xc00257b8c0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3450
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

                                                
                                                
goroutine 153 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007012c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2301 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002755860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2129 [chan receive, 53 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000104820, 0x31524d0)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1958
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 168 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 167 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0021d5f50, 0xc0006bae38?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x1?, 0x1?, 0xc0021d5fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021d5fd0?, 0x77df87?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 166 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0021ea190, 0x3c)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0007011a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021ea1c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x35a5740, 0xc00090a420}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000548a0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x77df25?, 0xc0008434a0?, 0xc0000549c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2275 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002d333e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2270
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 154 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021ea1c0, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3452 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x48f9380?, {0xc000477c28?, 0x0?, 0x3f918d0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000477c80?, 0x60e656?, 0x4922b40?, 0xc000477ce8?, 0x6013bd?, 0x235b1be0598?, 0xc000482e87?, 0xc000477ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00256063e?, 0x39c2, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00244d400?, {0xc00256063e?, 0x0?, 0xc00255c000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00244d400, {0xc00256063e, 0x39c2, 0x39c2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090f0f0, {0xc00256063e?, 0xc000477e68?, 0xc000477e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000893470, {0x35a43e0, 0xc00090f0f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc000893470}, {0x35a43e0, 0xc00090f0f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000d68600?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3450
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 1482 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002e2e650, 0x31)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0023287e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002e2e700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002109f90?, {0x35a5740, 0xc002b88030}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027667e0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x77df25?, 0xc000d11080?, 0xc000107020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1495
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3405 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffffc9c4de0?, {0xc00233bab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc0021cdbc8?, 0xc0021cdab8?, 0xc0021cdbe8?, 0x100c0021cdbb0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00090eb68?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc002dde810)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0002886e0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002420680?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002420680, 0xc0002886e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateFirstStart({0x35c7e58?, 0xc00016a930?}, 0xc002420680, {0xc002f02c40?, 0x66806d?}, {0x658b732b?, 0xc02cd87990?}, {0xb191ecaa930?, 0xc0021cdf60?}, {0xc00059ed00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00211f9e0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002420680, 0xc000069000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3404
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2238 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00071b550, 0x17)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002d332c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00071b580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020f9f88?, {0x35a5740, 0xc000c8c150}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279e240?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x77df25?, 0xc000288420?, 0xc0009d9680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3550 [select]:
os/exec.(*Cmd).watchCtx(0xc000842dc0, 0xc00279ef00)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3547
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 2878 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2877
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 1958 [chan receive, 53 minutes]:
testing.(*T).Run(0xc00219da00, {0x26f9497?, 0x7b640cba25c?}, 0x31524d0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop(0xc00219d860?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00219da00, 0x31522f8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1484 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1483
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 3451 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x6409d0?, {0xc002649c28?, 0x2?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x5000000040000?, 0x0?, 0x3?, 0xc002649ce8?, 0x601265?, 0xffffffffffffffff?, 0x5?, 0x10?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0020fd5ea?, 0x216, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00244cf00?, {0xc0020fd5ea?, 0x0?, 0xc0020fd400?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00244cf00, {0xc0020fd5ea, 0x216, 0x216})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090f080, {0xc0020fd5ea?, 0xc002d33800?, 0xc002649e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000893440, {0x35a43e0, 0xc00090f080})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc000893440}, {0x35a43e0, 0xc00090f080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00257b5c0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3450
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 1494 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002328900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1493
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 1172 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc00074a9a0, 0xc002a52900)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 868
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3407 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc00264fc28?, 0x0?, 0x3f2ac28?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00264fc80?, 0x60e656?, 0x4922b40?, 0xc00264fce8?, 0x6013bd?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00212c01d?, 0x3fe3, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00079b680?, {0xc00212c01d?, 0x0?, 0xc002120000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00079b680, {0xc00212c01d, 0x3fe3, 0x3fe3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090ebd8, {0xc00212c01d?, 0x1850?, 0x1850?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028e7650, {0x35a43e0, 0xc00090ebd8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc0028e7650}, {0x35a43e0, 0xc00090ebd8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002766540?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3405
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2334 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2333
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2877 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc002997f50, 0xc0004e0298?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x1?, 0x1?, 0xc002997fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002997fd0?, 0x77df87?, 0xc0009d8060?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2884
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3450 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffffc9c4de0?, {0xc00215ba80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc00215bb98?, 0xc00215ba88?, 0xc00215bbb8?, 0x100c00215bb80?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00090f060?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc0003fca20)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002437ce0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002420820?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002420820, 0xc002437ce0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateSecondStart({0x35c7e58, 0xc000956c40}, 0xc002420820, {0xc0004dda70, 0x11}, {0x658b7356?, 0xc02940ee70?}, {0xb231e334edc?, 0xc0024fff60?}, {0xc00059fc00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x30203d206e6f696c?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002420820, 0xc000884300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3327
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3377 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc00264df50, 0xc00264df48?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x65?, 0x0?, 0xc00264dfd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0xc0005a2600?, 0xc0001f3c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xb75f40?, 0xc0005a2718?, 0xc00264dfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2876 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0021ea750, 0x13)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0029556e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021ea780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002b7f90?, {0x35a5740, 0xc0008457d0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0002b7fd0?, 0x77df87?, 0xc00279e000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2884
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3047 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002329080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3102
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3435 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000d69810, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000701b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000d69840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005b3f90?, {0x35a5740, 0xc0028e76e0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0005b3fd0?, 0x77df87?, 0xc000054e40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3461
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 755 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x235f74ed288, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0x0?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00046bb98, 0xc002b5dbb8)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc00046bb80, 0x3f4, {0xc000710ff0?, 0xc000101400?, 0x3152d70?}, 0xc002b5dcc8?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc00046bb80, 0xc002b5dd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc00046bb80)
	/usr/local/go/src/net/fd_windows.go:166 +0x54
net.(*TCPListener).accept(0xc00012cea0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc00012cea0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc00061c1e0, {0x35bba00, 0xc00012cea0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc00061c1e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000685520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 720
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

goroutine 930 [chan receive, 153 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006e41c0, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 877
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3356 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021eafc0, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3372
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3437 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3436
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 849 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021311a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 877
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2958 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021ea900, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2956
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3547 [syscall, locked to thread]:
syscall.SyscallN(0x7ffffc9c4de0?, {0xc00277ba60?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x6?, 0xc00277bb78?, 0xc00277ba68?, 0xc00277bb98?, 0x100c00277bb60?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00090ec78?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc00256e090)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000842dc0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002551ba0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002551ba0, 0xc000842dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateStop({0x35c7e58?, 0xc00090c000?}, 0xc002551ba0, {0xc000d26000?, 0x66806d?}, {0x658b73c8?, 0xc0260fc320?}, {0xb3da5f0c9ac?, 0xc0025c3f60?}, {0xc002496000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:228 +0x15d
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x550000005c?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002551ba0, 0xc000729b80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3337
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2302 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000d68e80, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2448 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0022c1f50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x50?, 0xb712e5?, 0xc0022c1ec0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0xc00048c690?, 0xc002a549c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0022c1fd0?, 0xb6a045?, 0xc000a32780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2474
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2720 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00075d3c0, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2715
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3408 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0002886e0, 0xc00257aba0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3405
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3460 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000701c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3424
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2167 [chan receive, 5 minutes]:
testing.(*T).Run(0xc002161040, {0x26fa971?, 0x0?}, 0xc00290c180)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002161040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc002161040, 0xc002e2e580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2129
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1940 [chan receive, 61 minutes]:
testing.(*T).Run(0xc00219c1a0, {0x26f9497?, 0x66806d?}, 0xc000576408)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00219c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00219c1a0, 0x31522b0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3406 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x48f5f00?, {0xc00269dc28?, 0x2?, 0x3f918d0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00269dc80?, 0x60e656?, 0xc002d1b1e0?, 0xc00269dce8?, 0x601265?, 0x6385dc?, 0xc002d1b1e0?, 0xc00269dce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00262124c?, 0x5b4, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00079b180?, {0xc00262124c?, 0x0?, 0xc002621000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00079b180, {0xc00262124c, 0x5b4, 0x5b4})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090eb88, {0xc00262124c?, 0xc00269de68?, 0xc00269de68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028e7620, {0x35a43e0, 0xc00090eb88})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc0028e7620}, {0x35a43e0, 0xc00090eb88}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3405
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 919 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0006e4190, 0x36)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002131080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006e41c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x35a5740, 0xc00090bb00}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3111 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3110
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2754 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc00234ff50, 0x5ad?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x0?, 0x739180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x77df25?, 0xc000842dc0?, 0xc00257a240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2720
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3355 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00211e720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3372
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 1969 [chan receive, 3 minutes]:
testing.(*testContext).waitParallel(0xc0006a70e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1571 +0x53c
testing.tRunner(0xc002d1ba00, 0xc000576408)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2165 [chan receive, 9 minutes]:
testing.(*T).Run(0xc0021609c0, {0x26fa971?, 0x0?}, 0xc002ddcd00)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0021609c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc0021609c0, 0xc002e2e4c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2129
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3436 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0021cdf50, 0xc0021cdf48?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x65?, 0xc02cd87990?, 0xc0021cdfd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0xc0005a2900?, 0x739180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xb75f40?, 0xc0005a2a18?, 0xc0021cdfb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3461
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2449 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2448
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 1483 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc002313f50, 0xc0020ea478?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x1?, 0x1?, 0xc002313fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002313fd0?, 0x77df87?, 0xc002766480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1495
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2689 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00075d390, 0x14)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0023290e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00075d3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002b1f88?, {0x35a5740, 0xc0022d24b0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0002b1fd0?, 0x77df87?, 0xc0009d8d20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2720
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3202 [chan receive]:
testing.(*T).Run(0xc0025501a0, {0x2705f98?, 0xc002b5be00?}, 0xc000154c80)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0025501a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc0025501a0, 0xc002ddc000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 921 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 920
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 920 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0024f9f50, 0xc0021307d8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x1?, 0x1?, 0xc0024f9fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0024f9fd0?, 0x77df87?, 0xc0009d8f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2164 [chan receive, 3 minutes]:
testing.(*T).Run(0xc002160680, {0x26fa971?, 0x0?}, 0xc000068f80)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002160680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc002160680, 0xc002e2e480)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2129
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2332 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000d68e50, 0x18)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002755740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000d68e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00297bf88?, {0x35a5740, 0xc003195890}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00279e2a0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x77df25?, 0xc000288580?, 0xc00279e3c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2163 [chan receive, 53 minutes]:
testing.(*testContext).waitParallel(0xc0006a70e0)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002160340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002160340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002160340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc002160340, 0xc002e2e440)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2129
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2162 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002160000, {0x26fa971?, 0x0?}, 0xc002ddc000)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc002160000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xaf8
testing.tRunner(0xc002160000, 0xc002e2e3c0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2129
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3337 [chan receive]:
testing.(*T).Run(0xc00213cb60, {0x26f86b8?, 0xc002283e00?}, 0xc000729b80)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00213cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc00213cb60, 0xc00290c180)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2167
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2615 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2614
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2883 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002955800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2240 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2239
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2602 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002e2f600, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2593
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2473 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00255ac60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2472
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3404 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0024204e0, {0x2703ddd?, 0xc00269be00?}, 0xc000069000)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0024204e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc0024204e0, 0xc000068f80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2164
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2276 [chan receive, 35 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00071b580, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2270
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2613 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002e2f5d0, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002954fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002e2f600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x486b430?, {0x35a5740, 0xc002ae68d0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0024f5fb0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc000c7f140?, 0x2020202035373637?, 0xc002e43710?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2602
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2614 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc002305f50, 0xc000055c20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x0?, 0xc002305f58?, 0x727dbe?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0xc002535860?, 0x739101?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xc000055bc0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2602
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2884 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021ea780, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2333 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0021f1f50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x50?, 0xb712e5?, 0xc0021f1ec0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0xc0004c9400?, 0xc000055c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021f1fd0?, 0xb6a045?, 0xc0001f2d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3461 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000d69840, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3424
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3502 [syscall, locked to thread]:
syscall.SyscallN(0x7ffffc9c4de0?, {0xc00085ba80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xd?, 0xc00085bb98?, 0xc00085ba88?, 0xc00085bbb8?, 0x100c00085bb80?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc000d06530?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc00254e300)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026262c0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc002420340?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc002420340, 0xc0026262c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.validateSecondStart({0x35c7e58, 0xc000440000}, 0xc002420340, {0xc000d26018, 0x16}, {0x658b73a9?, 0xc01af51bfc?}, {0xb3663177bb4?, 0xc0024fdf60?}, {0xc000186d80, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x66202265646f6d2d?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002420340, 0xc000154c80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 3202
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3376 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0021eaf90, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00211e600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021eafc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002935f90?, {0x35a5740, 0xc002968ed0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002935fd0?, 0x77df87?, 0xc0001f2f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2601 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0029551a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2593
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2755 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2754
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2981 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc0024f7f50, 0xb6ae31?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0xb8?, 0xb6b095?, 0xc0005a2600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x0?, 0x739180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0024f7fd0?, 0xb7bd45?, 0xc0005a2600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2958
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3225 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3224
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2719 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002329200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2715
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3410 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3377
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2980 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0021ea8d0, 0x13)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002955980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021ea900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00297bf48?, {0x35a5740, 0xc002e184b0}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000054ea0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x77df25?, 0xc0024a26e0?, 0xc000054fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2958
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2957 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002955aa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2956
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3327 [chan receive, 3 minutes]:
testing.(*T).Run(0xc002550d00, {0x2705f98?, 0xc002187e00?}, 0xc000884300)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002550d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x25e
testing.tRunner(0xc002550d00, 0xc002ddcd00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 2165
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 3109 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000d69010, 0x11)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002328f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000d69040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x35a5740, 0xc0022d2900}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3048
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2982 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2981
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 3110 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35c8018, 0xc000106180}, 0xc002347f50, 0xc002d32a18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x35c8018, 0xc000106180}, 0x1?, 0x1?, 0xc002347fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35c8018?, 0xc000106180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002347fd0?, 0x77df87?, 0xc000c8dfb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3048
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 3048 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000d69040, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3102
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3543 [IO wait]:
internal/poll.runtime_pollWait(0x235f74ed190, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc5db7c458baf7827?, 0xc0023516b0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000670798, 0x3152d98)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).Read(0xc000670780, {0xc0009ea600, 0x1300, 0x1300})
	/usr/local/go/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc000670780, {0xc0009ea600?, 0xc0009ea605?, 0xf02?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00090ea90, {0xc0009ea600?, 0x615de5?, 0xc0009976b8?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc00219b4d0, {0xc0009ea600?, 0xc00219b4d0?, 0x0?})
	/usr/local/go/src/crypto/tls/conn.go:805 +0x3b
bytes.(*Buffer).ReadFrom(0xc0009977a8, {0x35a5ec0, 0xc00219b4d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000997500, {0x235f7f34898?, 0xc00090ea90}, 0x1300?)
	/usr/local/go/src/crypto/tls/conn.go:827 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000997500, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:625 +0x250
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:587
crypto/tls.(*Conn).Read(0xc000997500, {0xc0025a7000, 0x1000, 0xb74ca5?})
	/usr/local/go/src/crypto/tls/conn.go:1369 +0x158
bufio.(*Reader).Read(0xc00211fb60, {0xc002536ac0, 0x9, 0x48681b0?})
	/usr/local/go/src/bufio/bufio.go:244 +0x197
io.ReadAtLeast({0x35a4500, 0xc00211fb60}, {0xc002536ac0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc002536ac0, 0x9, 0x2358400?}, {0x35a4500?, 0xc00211fb60?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002536a80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc002351f98)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2275 +0x11f
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0023da300)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:2170 +0x65
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3542
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.19.0/http2/transport.go:821 +0xcbe

goroutine 3223 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002e2f4d0, 0xc)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x35a1380?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00211f0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002e2f500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002509f90?, {0x35a5740, 0xc002b02180}, 0x1, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0x63821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002509fd0?, 0x77df87?, 0xc0005a2600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3216
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3215 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00211f200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3221
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3216 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002e2f500, 0xc000106180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3221
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3548 [syscall, locked to thread]:
syscall.SyscallN(0x6409d0?, {0xc0023a1c28?, 0x2?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0023a1c80?, 0x100000000000000?, 0xc002551a00?, 0xc0023a1ce8?, 0x601265?, 0x6385dc?, 0xc002551a00?, 0xc0023a1ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0024cfa00?, 0x200, 0x200?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000671180?, {0xc0024cfa00?, 0x0?, 0xc0024cfa00?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000671180, {0xc0024cfa00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090ec80, {0xc0024cfa00?, 0xc0023a1e68?, 0xc0023a1e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00266cff0, {0x35a43e0, 0xc00090ec80})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc00266cff0}, {0x35a43e0, 0xc00090ec80}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00279f5c0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3547
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3549 [syscall, locked to thread]:
syscall.SyscallN(0x48f6800?, {0xc00239bc28?, 0x2?, 0x3f2ac28?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00239bc80?, 0x60e656?, 0x4922b40?, 0xc00239bce8?, 0x6013bd?, 0x235b1be0a28?, 0x4d?, 0x20?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0021985e7?, 0x219, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc000671680?, {0xc0021985e7?, 0xc00239bf80?, 0xc002198000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc000671680, {0xc0021985e7, 0x219, 0x219})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00090eca8, {0xc0021985e7?, 0x21e29e0?, 0xc00239be68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00266d020, {0x35a43e0, 0xc00090eca8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc00266d020}, {0x35a43e0, 0xc00090eca8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0001e3080?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3547
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3519 [syscall, locked to thread]:
syscall.SyscallN(0x101?, {0xc0024fdc28?, 0x2084525?, 0xc002420340?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10003bac05f?, 0x235f734a6c0?, 0x235b1be0a28?, 0xc0024fdce8?, 0x601265?, 0xc0026262c0?, 0x160?, 0xc0024fdce8?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00271ba63?, 0x59d, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00244c280?, {0xc00271ba63?, 0x235f739f168?, 0xc00271b800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00244c280, {0xc00271ba63, 0x59d, 0x59d})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000d06540, {0xc00271ba63?, 0x3bac05f?, 0xc0024fde68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008923c0, {0x35a43e0, 0xc000d06540})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc0008923c0}, {0x35a43e0, 0xc000d06540}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000154c80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3502
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 3520 [syscall, locked to thread]:
syscall.SyscallN(0x48f8780?, {0xc00292fc28?, 0x2416c20?, 0x3f2ac28?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00292fc80?, 0x60e656?, 0x4922b40?, 0xc00292fce8?, 0x6013bd?, 0x235b1be0eb8?, 0x77?, 0xc00292fd20?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc000898504?, 0x1afc, 0x6a7f7f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00244c780?, {0xc000898504?, 0x2?, 0xc000896000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00244c780, {0xc000898504, 0x1afc, 0x1afc})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000d06570, {0xc000898504?, 0x3bac05f?, 0xc00292fe68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008923f0, {0x35a43e0, 0xc000d06570})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x35a4460, 0xc0008923f0}, {0x35a43e0, 0xc000d06570}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002ddde80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3502
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

                                                
                                                
goroutine 3521 [select]:
os/exec.(*Cmd).watchCtx(0xc0026262c0, 0xc00257a3c0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3502
	/usr/local/go/src/os/exec/exec.go:743 +0xa34


Test pass (157/202)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.24
4 TestDownloadOnly/v1.16.0/preload-exists 0.08
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
10 TestDownloadOnly/v1.28.4/json-events 12.4
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.29
17 TestDownloadOnly/v1.29.0-rc.2/json-events 15.1
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.46
23 TestDownloadOnly/DeleteAll 1.49
24 TestDownloadOnly/DeleteAlwaysSucceeds 1.42
26 TestBinaryMirror 7.6
27 TestOffline 542.31
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.31
32 TestAddons/Setup 394.48
35 TestAddons/parallel/Ingress 68.77
36 TestAddons/parallel/InspektorGadget 28.24
37 TestAddons/parallel/MetricsServer 23.34
38 TestAddons/parallel/HelmTiller 32.76
40 TestAddons/parallel/CSI 101.68
41 TestAddons/parallel/Headlamp 37.84
42 TestAddons/parallel/CloudSpanner 22.17
43 TestAddons/parallel/LocalPath 96.8
44 TestAddons/parallel/NvidiaDevicePlugin 21.04
45 TestAddons/parallel/Yakd 5.34
48 TestAddons/serial/GCPAuth/Namespaces 0.39
49 TestAddons/StoppedEnableDisable 49.29
50 TestCertOptions 384.96
51 TestCertExpiration 746.26
52 TestDockerFlags 453.5
53 TestForceSystemdFlag 246.78
54 TestForceSystemdEnv 410.87
61 TestErrorSpam/start 17.79
62 TestErrorSpam/status 37.69
63 TestErrorSpam/pause 23.54
64 TestErrorSpam/unpause 23.36
65 TestErrorSpam/stop 52.85
68 TestFunctional/serial/CopySyncFile 0.03
69 TestFunctional/serial/StartWithProxy 206.17
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 117.24
72 TestFunctional/serial/KubeContext 0.16
73 TestFunctional/serial/KubectlGetPods 0.26
76 TestFunctional/serial/CacheCmd/cache/add_remote 27.08
77 TestFunctional/serial/CacheCmd/cache/add_local 10.8
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.29
79 TestFunctional/serial/CacheCmd/cache/list 0.28
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.68
81 TestFunctional/serial/CacheCmd/cache/cache_reload 37.66
82 TestFunctional/serial/CacheCmd/cache/delete 0.61
83 TestFunctional/serial/MinikubeKubectlCmd 0.55
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.71
85 TestFunctional/serial/ExtraConfig 128.26
86 TestFunctional/serial/ComponentHealth 0.2
87 TestFunctional/serial/LogsCmd 9.04
88 TestFunctional/serial/LogsFileCmd 11.01
89 TestFunctional/serial/InvalidService 21.84
95 TestFunctional/parallel/StatusCmd 42.83
99 TestFunctional/parallel/ServiceCmdConnect 45.42
100 TestFunctional/parallel/AddonsCmd 0.79
101 TestFunctional/parallel/PersistentVolumeClaim 47.86
103 TestFunctional/parallel/SSHCmd 22.78
104 TestFunctional/parallel/CpCmd 59.03
105 TestFunctional/parallel/MySQL 66.66
106 TestFunctional/parallel/FileSync 10.62
107 TestFunctional/parallel/CertSync 63.58
111 TestFunctional/parallel/NodeLabels 0.27
113 TestFunctional/parallel/NonActiveRuntimeDisabled 10.84
115 TestFunctional/parallel/License 2.95
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.9
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 30.73
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ServiceCmd/DeployApp 8.47
128 TestFunctional/parallel/ServiceCmd/List 14.41
129 TestFunctional/parallel/ProfileCmd/profile_not_create 9.57
130 TestFunctional/parallel/ProfileCmd/profile_list 9.31
131 TestFunctional/parallel/ServiceCmd/JSONOutput 14.36
132 TestFunctional/parallel/ProfileCmd/profile_json_output 9.38
135 TestFunctional/parallel/DockerEnv/powershell 44.92
137 TestFunctional/parallel/UpdateContextCmd/no_changes 3.3
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.72
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.67
140 TestFunctional/parallel/Version/short 0.29
141 TestFunctional/parallel/Version/components 8.29
142 TestFunctional/parallel/ImageCommands/ImageListShort 8.2
143 TestFunctional/parallel/ImageCommands/ImageListTable 8
144 TestFunctional/parallel/ImageCommands/ImageListJson 8
145 TestFunctional/parallel/ImageCommands/ImageListYaml 8.14
146 TestFunctional/parallel/ImageCommands/ImageBuild 28.74
147 TestFunctional/parallel/ImageCommands/Setup 4.07
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.18
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 18.66
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.91
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.24
152 TestFunctional/parallel/ImageCommands/ImageRemove 15
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.28
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.84
155 TestFunctional/delete_addon-resizer_images 0.52
156 TestFunctional/delete_my-image_image 0.2
157 TestFunctional/delete_minikube_cached_images 0.2
161 TestImageBuild/serial/Setup 199.88
162 TestImageBuild/serial/NormalBuild 9.59
163 TestImageBuild/serial/BuildWithBuildArg 9.37
164 TestImageBuild/serial/BuildWithDockerIgnore 7.8
165 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.67
168 TestIngressAddonLegacy/StartLegacyK8sCluster 244.92
170 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 39.84
171 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 14.78
172 TestIngressAddonLegacy/serial/ValidateIngressAddons 92.35
175 TestJSONOutput/start/Command 205.52
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.95
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.8
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 29.25
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.62
203 TestMainNoArgs 0.26
204 TestMinikubeProfile 507.44
207 TestMountStart/serial/StartWithMountFirst 150.67
208 TestMountStart/serial/VerifyMountFirst 9.68
209 TestMountStart/serial/StartWithMountSecond 151.31
210 TestMountStart/serial/VerifyMountSecond 9.63
211 TestMountStart/serial/DeleteFirst 26.72
212 TestMountStart/serial/VerifyMountPostDelete 9.65
213 TestMountStart/serial/Stop 22.06
214 TestMountStart/serial/RestartStopped 112.87
215 TestMountStart/serial/VerifyMountPostStop 9.67
218 TestMultiNode/serial/FreshStart2Nodes 426.11
219 TestMultiNode/serial/DeployApp2Nodes 10.1
221 TestMultiNode/serial/AddNode 221.75
222 TestMultiNode/serial/MultiNodeLabels 0.22
223 TestMultiNode/serial/ProfileList 7.75
224 TestMultiNode/serial/CopyFile 366.26
225 TestMultiNode/serial/StopNode 72.09
226 TestMultiNode/serial/StartAfterStop 173.63
231 TestPreload 462.63
232 TestScheduledStopWindows 330.89
239 TestKubernetesUpgrade 1107.28
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
255 TestStoppedBinaryUpgrade/Setup 0.53
265 TestPause/serial/Start 341.73
267 TestStoppedBinaryUpgrade/MinikubeLogs 10.43
TestDownloadOnly/v1.16.0/json-events (16.24s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-253200 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-253200 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (16.2380654s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.24s)

TestDownloadOnly/v1.16.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.08s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-253200
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-253200: exit status 85 (323.9113ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |          |
	|         | -p download-only-253200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:46:04
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:46:04.739364   11296 out.go:296] Setting OutFile to fd 588 ...
	I1226 21:46:04.740241   11296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:46:04.740241   11296 out.go:309] Setting ErrFile to fd 592...
	I1226 21:46:04.740241   11296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:46:04.755469   11296 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1226 21:46:04.766771   11296 out.go:303] Setting JSON to true
	I1226 21:46:04.769723   11296 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1563,"bootTime":1703625601,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 21:46:04.769723   11296 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 21:46:04.776257   11296 out.go:97] [download-only-253200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	W1226 21:46:04.776933   11296 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1226 21:46:04.776933   11296 notify.go:220] Checking for updates...
	I1226 21:46:04.779721   11296 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 21:46:04.782567   11296 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 21:46:04.785062   11296 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:46:04.787438   11296 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1226 21:46:04.792990   11296 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:46:04.794335   11296 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:46:10.497519   11296 out.go:97] Using the hyperv driver based on user configuration
	I1226 21:46:10.497681   11296 start.go:298] selected driver: hyperv
	I1226 21:46:10.497681   11296 start.go:902] validating driver "hyperv" against <nil>
	I1226 21:46:10.498179   11296 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:46:10.549793   11296 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1226 21:46:10.550514   11296 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1226 21:46:10.550514   11296 cni.go:84] Creating CNI manager for ""
	I1226 21:46:10.550514   11296 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1226 21:46:10.550514   11296 start_flags.go:323] config:
	{Name:download-only-253200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-253200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:46:10.551461   11296 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:46:10.555153   11296 out.go:97] Downloading VM boot image ...
	I1226 21:46:10.555153   11296 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1702708929-17806-amd64.iso
	I1226 21:46:13.920385   11296 out.go:97] Starting control plane node download-only-253200 in cluster download-only-253200
	I1226 21:46:13.920385   11296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1226 21:46:13.956367   11296 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1226 21:46:13.956932   11296 cache.go:56] Caching tarball of preloaded images
	I1226 21:46:13.957496   11296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1226 21:46:13.960661   11296 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1226 21:46:13.960661   11296 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:14.031933   11296 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1226 21:46:17.729978   11296 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:17.731027   11296 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:18.784264   11296 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1226 21:46:18.785775   11296 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-253200\config.json ...
	I1226 21:46:18.785897   11296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-253200\config.json: {Name:mke2e24eee479c3e8f9cb9420932a71320f86a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:46:18.787450   11296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1226 21:46:18.788899   11296 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-253200"

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 21:46:20.979395    7848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-253200 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-253200 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (12.4033666s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.40s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-253200
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-253200: exit status 85 (293.515ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |          |
	|         | -p download-only-253200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |          |
	|         | -p download-only-253200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:46:21
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:46:21.374638    3656 out.go:296] Setting OutFile to fd 592 ...
	I1226 21:46:21.375364    3656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:46:21.375364    3656 out.go:309] Setting ErrFile to fd 604...
	I1226 21:46:21.375364    3656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:46:21.389271    3656 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1226 21:46:21.398420    3656 out.go:303] Setting JSON to true
	I1226 21:46:21.402913    3656 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1580,"bootTime":1703625601,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 21:46:21.402993    3656 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 21:46:21.407485    3656 out.go:97] [download-only-253200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 21:46:21.410062    3656 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 21:46:21.407682    3656 notify.go:220] Checking for updates...
	I1226 21:46:21.414445    3656 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 21:46:21.417814    3656 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:46:21.420421    3656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1226 21:46:21.425493    3656 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:46:21.426722    3656 config.go:182] Loaded profile config "download-only-253200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1226 21:46:21.426894    3656 start.go:810] api.Load failed for download-only-253200: filestore "download-only-253200": Docker machine "download-only-253200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:46:21.426894    3656 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 21:46:21.427425    3656 start.go:810] api.Load failed for download-only-253200: filestore "download-only-253200": Docker machine "download-only-253200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:46:27.108540    3656 out.go:97] Using the hyperv driver based on existing profile
	I1226 21:46:27.108540    3656 start.go:298] selected driver: hyperv
	I1226 21:46:27.108540    3656 start.go:902] validating driver "hyperv" against &{Name:download-only-253200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-253200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:46:27.160515    3656 cni.go:84] Creating CNI manager for ""
	I1226 21:46:27.160515    3656 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 21:46:27.160515    3656 start_flags.go:323] config:
	{Name:download-only-253200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-253200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:46:27.161344    3656 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:46:27.165531    3656 out.go:97] Starting control plane node download-only-253200 in cluster download-only-253200
	I1226 21:46:27.165667    3656 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 21:46:27.206891    3656 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 21:46:27.206891    3656 cache.go:56] Caching tarball of preloaded images
	I1226 21:46:27.208095    3656 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1226 21:46:27.211758    3656 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1226 21:46:27.211758    3656 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:27.285800    3656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1226 21:46:30.807237    3656 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:30.808154    3656 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-253200"

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 21:46:33.705714    2952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (15.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-253200 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-253200 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (15.1030276s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (15.10s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-253200
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-253200: exit status 85 (454.6229ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |          |
	|         | -p download-only-253200           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |          |
	|         | -p download-only-253200           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	| start   | -o=json --download-only           | download-only-253200 | minikube1\jenkins | v1.32.0 | 26 Dec 23 21:46 UTC |          |
	|         | -p download-only-253200           |                      |                   |         |                     |          |
	|         | --force --alsologtostderr         |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |          |
	|         | --container-runtime=docker        |                      |                   |         |                     |          |
	|         | --driver=hyperv                   |                      |                   |         |                     |          |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:46:34
	Running on machine: minikube1
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:46:34.098314    8548 out.go:296] Setting OutFile to fd 640 ...
	I1226 21:46:34.098923    8548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:46:34.099016    8548 out.go:309] Setting ErrFile to fd 672...
	I1226 21:46:34.099016    8548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:46:34.118254    8548 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1226 21:46:34.136201    8548 out.go:303] Setting JSON to true
	I1226 21:46:34.140282    8548 start.go:128] hostinfo: {"hostname":"minikube1","uptime":1592,"bootTime":1703625601,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 21:46:34.140893    8548 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 21:46:34.359174    8548 out.go:97] [download-only-253200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 21:46:34.359974    8548 notify.go:220] Checking for updates...
	I1226 21:46:34.363091    8548 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 21:46:34.366475    8548 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 21:46:34.368838    8548 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:46:34.371974    8548 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1226 21:46:34.376934    8548 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:46:34.377635    8548 config.go:182] Loaded profile config "download-only-253200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1226 21:46:34.378443    8548 start.go:810] api.Load failed for download-only-253200: filestore "download-only-253200": Docker machine "download-only-253200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:46:34.378635    8548 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 21:46:34.378905    8548 start.go:810] api.Load failed for download-only-253200: filestore "download-only-253200": Docker machine "download-only-253200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:46:40.023108    8548 out.go:97] Using the hyperv driver based on existing profile
	I1226 21:46:40.023108    8548 start.go:298] selected driver: hyperv
	I1226 21:46:40.023819    8548 start.go:902] validating driver "hyperv" against &{Name:download-only-253200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-253200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:46:40.077508    8548 cni.go:84] Creating CNI manager for ""
	I1226 21:46:40.077508    8548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1226 21:46:40.077508    8548 start_flags.go:323] config:
	{Name:download-only-253200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-253200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:46:40.077508    8548 iso.go:125] acquiring lock: {Name:mk6e44fd4f974e035b521383471f58bfbae3f4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:46:40.081271    8548 out.go:97] Starting control plane node download-only-253200 in cluster download-only-253200
	I1226 21:46:40.081271    8548 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1226 21:46:40.128297    8548 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1226 21:46:40.128687    8548 cache.go:56] Caching tarball of preloaded images
	I1226 21:46:40.128968    8548 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1226 21:46:40.132375    8548 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1226 21:46:40.132375    8548 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:40.204097    8548 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1226 21:46:43.635083    8548 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1226 21:46:43.636088    8548 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-253200"

-- /stdout --
** stderr ** 
	W1226 21:46:49.115222   13888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.46s)

TestDownloadOnly/DeleteAll (1.49s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4915598s)
--- PASS: TestDownloadOnly/DeleteAll (1.49s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.42s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-253200
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-253200: (1.4214115s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.42s)

TestBinaryMirror (7.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-481600 --alsologtostderr --binary-mirror http://127.0.0.1:60239 --driver=hyperv
aaa_download_only_test.go:307: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-481600 --alsologtostderr --binary-mirror http://127.0.0.1:60239 --driver=hyperv: (6.6536521s)
helpers_test.go:175: Cleaning up "binary-mirror-481600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-481600
--- PASS: TestBinaryMirror (7.60s)

TestOffline (542.31s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-152600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-152600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (8m13.2963304s)
helpers_test.go:175: Cleaning up "offline-docker-152600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-152600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-152600: (49.0141827s)
--- PASS: TestOffline (542.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-839600
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-839600: exit status 85 (294.5675ms)

-- stdout --
	* Profile "addons-839600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-839600"

-- /stdout --
** stderr ** 
	W1226 21:47:01.442393    6968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-839600
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-839600: exit status 85 (310.4666ms)

-- stdout --
	* Profile "addons-839600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-839600"

-- /stdout --
** stderr ** 
	W1226 21:47:01.440940   14820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

TestAddons/Setup (394.48s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-839600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-839600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m34.483106s)
--- PASS: TestAddons/Setup (394.48s)

TestAddons/parallel/Ingress (68.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-839600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-839600 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-839600 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6bc6c165-c33c-4a4b-80ec-9628e101a363] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6bc6c165-c33c-4a4b-80ec-9628e101a363] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.0205595s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.1395829s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-839600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1226 21:55:09.305032    2548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-839600 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 ip: (2.5621078s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.21.177.30
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable ingress-dns --alsologtostderr -v=1: (16.2394932s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable ingress --alsologtostderr -v=1: (22.2479259s)
--- PASS: TestAddons/parallel/Ingress (68.77s)

TestAddons/parallel/InspektorGadget (28.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jlhnb" [8b95f9c2-5246-4ac6-917b-84f311775c36] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0086731s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-839600
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-839600: (22.2233306s)
--- PASS: TestAddons/parallel/InspektorGadget (28.24s)

TestAddons/parallel/MetricsServer (23.34s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 33.3142ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dbh5s" [95643875-8638-4f82-8665-c8b8e55c291e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0217301s
addons_test.go:415: (dbg) Run:  kubectl --context addons-839600 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable metrics-server --alsologtostderr -v=1: (17.0629559s)
--- PASS: TestAddons/parallel/MetricsServer (23.34s)

TestAddons/parallel/HelmTiller (32.76s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 9.2401ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-4xqr4" [6f66b298-6de4-4f37-9f94-b229d92fe409] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.016104s
addons_test.go:473: (dbg) Run:  kubectl --context addons-839600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-839600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.371222s)
addons_test.go:478: kubectl --context addons-839600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-839600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-839600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.8091692s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable helm-tiller --alsologtostderr -v=1: (14.9851859s)
--- PASS: TestAddons/parallel/HelmTiller (32.76s)

TestAddons/parallel/CSI (101.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 53.7197ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-839600 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-839600 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1676afa0-b387-4097-b097-3b9dafada9ad] Pending
helpers_test.go:344: "task-pv-pod" [1676afa0-b387-4097-b097-3b9dafada9ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1676afa0-b387-4097-b097-3b9dafada9ad] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.0126695s
addons_test.go:584: (dbg) Run:  kubectl --context addons-839600 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-839600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-839600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-839600 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-839600 delete pod task-pv-pod: (1.189147s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-839600 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-839600 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-839600 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [888759ef-9dea-4a06-ac1d-a1a36b93e280] Pending
helpers_test.go:344: "task-pv-pod-restore" [888759ef-9dea-4a06-ac1d-a1a36b93e280] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [888759ef-9dea-4a06-ac1d-a1a36b93e280] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0201241s
addons_test.go:626: (dbg) Run:  kubectl --context addons-839600 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-839600 delete pod task-pv-pod-restore: (1.2387984s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-839600 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-839600 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.3029199s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable volumesnapshots --alsologtostderr -v=1: (15.8675464s)
--- PASS: TestAddons/parallel/CSI (101.68s)

TestAddons/parallel/Headlamp (37.84s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-839600 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-839600 --alsologtostderr -v=1: (16.8174608s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-rx82f" [b2d7238d-8a9c-4d0b-af65-ba200fa43072] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-rx82f" [b2d7238d-8a9c-4d0b-af65-ba200fa43072] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-rx82f" [b2d7238d-8a9c-4d0b-af65-ba200fa43072] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.0150482s
--- PASS: TestAddons/parallel/Headlamp (37.84s)

TestAddons/parallel/CloudSpanner (22.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-z5tw5" [0c4b78f0-77cc-4191-b85e-202dd237f918] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0201494s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-839600
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-839600: (17.1394901s)
--- PASS: TestAddons/parallel/CloudSpanner (22.17s)

TestAddons/parallel/LocalPath (96.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-839600 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-839600 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d30f082f-0fb6-46ee-8808-b4b64e0c6459] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d30f082f-0fb6-46ee-8808-b4b64e0c6459] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d30f082f-0fb6-46ee-8808-b4b64e0c6459] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0102301s
addons_test.go:891: (dbg) Run:  kubectl --context addons-839600 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 ssh "cat /opt/local-path-provisioner/pvc-861b55a7-d7ac-4486-8979-8c51e4270cae_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 ssh "cat /opt/local-path-provisioner/pvc-861b55a7-d7ac-4486-8979-8c51e4270cae_default_test-pvc/file1": (10.4557344s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-839600 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-839600 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-839600 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-839600 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m3.4683698s)
--- PASS: TestAddons/parallel/LocalPath (96.80s)

TestAddons/parallel/NvidiaDevicePlugin (21.04s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2fkmh" [cc4602ed-0428-4409-a78b-30d70e22826f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0167514s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-839600
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-839600: (16.0182586s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.04s)

TestAddons/parallel/Yakd (5.34s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-45j2w" [575ef3a7-f82d-4222-9197-8a6386b8c2fe] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.3380042s
--- PASS: TestAddons/parallel/Yakd (5.34s)

TestAddons/serial/GCPAuth/Namespaces (0.39s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-839600 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-839600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.39s)

TestAddons/StoppedEnableDisable (49.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-839600
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-839600: (35.9994736s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-839600
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-839600: (5.4193251s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-839600
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-839600: (4.9679107s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-839600
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-839600: (2.906124s)
--- PASS: TestAddons/StoppedEnableDisable (49.29s)

TestCertOptions (384.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-724600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E1226 23:54:01.503000   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:54:59.385233   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-724600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (5m24.7022287s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-724600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-724600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.7166321s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-724600 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-724600 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-724600 -- "sudo cat /etc/kubernetes/admin.conf": (10.1658772s)
helpers_test.go:175: Cleaning up "cert-options-724600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-724600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-724600: (40.1845898s)
--- PASS: TestCertOptions (384.96s)

TestCertExpiration (746.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-721200 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-721200 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m44.1262287s)
E1226 23:47:04.731209   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:48:36.124517   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:49:01.501269   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-721200 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-721200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (2m50.8876511s)
helpers_test.go:175: Cleaning up "cert-expiration-721200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-721200
E1226 23:53:36.116138   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-721200: (51.2433738s)
--- PASS: TestCertExpiration (746.26s)

TestDockerFlags (453.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-107900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-107900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m29.3668175s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-107900 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-107900 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.0772555s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-107900 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-107900 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (11.1910734s)
helpers_test.go:175: Cleaning up "docker-flags-107900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-107900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-107900: (42.8645352s)
--- PASS: TestDockerFlags (453.50s)

TestForceSystemdFlag (246.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-721200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-721200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m19.043433s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-721200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-721200 ssh "docker info --format {{.CgroupDriver}}": (10.2509853s)
helpers_test.go:175: Cleaning up "force-systemd-flag-721200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-721200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-721200: (37.4878639s)
--- PASS: TestForceSystemdFlag (246.78s)

TestForceSystemdEnv (410.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-164200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E1226 23:46:05.428916   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-164200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (6m3.0818256s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-164200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-164200 ssh "docker info --format {{.CgroupDriver}}": (11.4742486s)
helpers_test.go:175: Cleaning up "force-systemd-env-164200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-164200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-164200: (36.3156992s)
--- PASS: TestForceSystemdEnv (410.87s)

TestErrorSpam/start (17.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 start --dry-run: (5.9503628s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 start --dry-run: (5.9191576s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 start --dry-run: (5.9164845s)
--- PASS: TestErrorSpam/start (17.79s)

TestErrorSpam/status (37.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 status: (12.8861784s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 status: (12.4194759s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 status: (12.3815427s)
--- PASS: TestErrorSpam/status (37.69s)

TestErrorSpam/pause (23.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 pause: (8.0321259s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 pause: (7.816968s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 pause: (7.6850459s)
--- PASS: TestErrorSpam/pause (23.54s)

TestErrorSpam/unpause (23.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 unpause: (7.8782064s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 unpause: (7.7142247s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 unpause: (7.7597417s)
--- PASS: TestErrorSpam/unpause (23.36s)

TestErrorSpam/stop (52.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 stop
E1226 22:03:36.126731   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 stop: (34.7613238s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 stop: (9.4114742s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-211800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-211800 stop: (8.6746961s)
--- PASS: TestErrorSpam/stop (52.85s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\10728\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (206.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-796600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-796600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m26.1560216s)
--- PASS: TestFunctional/serial/StartWithProxy (206.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (117.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-796600 --alsologtostderr -v=8
E1226 22:08:36.125270   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-796600 --alsologtostderr -v=8: (1m57.2422937s)
functional_test.go:659: soft start took 1m57.2434248s for "functional-796600" cluster.
--- PASS: TestFunctional/serial/SoftStart (117.24s)

TestFunctional/serial/KubeContext (0.16s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.16s)

TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-796600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

TestFunctional/serial/CacheCmd/cache/add_remote (27.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cache add registry.k8s.io/pause:3.1: (9.1916742s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cache add registry.k8s.io/pause:3.3: (8.9153701s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cache add registry.k8s.io/pause:latest: (8.9673238s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.08s)

TestFunctional/serial/CacheCmd/cache/add_local (10.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-796600 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local735955816\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-796600 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local735955816\001: (1.8468846s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cache add minikube-local-cache-test:functional-796600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cache add minikube-local-cache-test:functional-796600: (8.4010902s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cache delete minikube-local-cache-test:functional-796600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-796600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.80s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

TestFunctional/serial/CacheCmd/cache/list (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.28s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh sudo crictl images: (9.6840385s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.68s)

TestFunctional/serial/CacheCmd/cache/cache_reload (37.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.7140055s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.632418s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W1226 22:10:38.020108    3464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cache reload: (8.5684986s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.740224s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.61s)

TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 kubectl -- --context functional-796600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-796600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.71s)

TestFunctional/serial/ExtraConfig (128.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-796600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-796600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m8.2548383s)
functional_test.go:757: restart took 2m8.2551133s for "functional-796600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (128.26s)

TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-796600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

                                                
                                    
TestFunctional/serial/LogsCmd (9.04s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 logs: (9.0432497s)
--- PASS: TestFunctional/serial/LogsCmd (9.04s)

TestFunctional/serial/LogsFileCmd (11.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1051360088\001\logs.txt
E1226 22:13:36.120075   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1051360088\001\logs.txt: (11.0067695s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.01s)

TestFunctional/serial/InvalidService (21.84s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-796600 apply -f testdata\invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-796600
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-796600: exit status 115 (17.243926s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.21.180.84:32737 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	W1226 22:13:40.760573   10812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-796600 delete -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-796600 delete -f testdata\invalidsvc.yaml: (1.1610305s)
--- PASS: TestFunctional/serial/InvalidService (21.84s)

TestFunctional/parallel/StatusCmd (42.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 status: (14.2532281s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.4141915s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 status -o json: (13.1612982s)
--- PASS: TestFunctional/parallel/StatusCmd (42.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (45.42s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-796600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-796600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-hlfg7" [3f9f76a7-b79a-4841-a7fc-28621770e82b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-hlfg7" [3f9f76a7-b79a-4841-a7fc-28621770e82b] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.0207584s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 service hello-node-connect --url: (19.9817359s)
functional_test.go:1654: found endpoint for hello-node-connect: http://172.21.180.84:32530
functional_test.go:1674: http://172.21.180.84:32530: success! body:

Hostname: hello-node-connect-55497b8b78-hlfg7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.21.180.84:8080/

Request Headers:
	accept-encoding=gzip
	host=172.21.180.84:32530
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (45.42s)

TestFunctional/parallel/AddonsCmd (0.79s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.79s)

TestFunctional/parallel/PersistentVolumeClaim (47.86s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b5e46b0c-d470-41dc-bd73-0b918ed30ce2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0196734s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-796600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-796600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-796600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-796600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b2e78cb7-f5df-4cc3-9e51-9f5a27e7f638] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b2e78cb7-f5df-4cc3-9e51-9f5a27e7f638] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0180594s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-796600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-796600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-796600 delete -f testdata/storage-provisioner/pod.yaml: (2.3747057s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-796600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fb7cfc8-297b-4d88-98f7-6d48becce5c5] Pending
helpers_test.go:344: "sp-pod" [2fb7cfc8-297b-4d88-98f7-6d48becce5c5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fb7cfc8-297b-4d88-98f7-6d48becce5c5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.0190247s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-796600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.86s)

TestFunctional/parallel/SSHCmd (22.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "echo hello": (12.2414555s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "cat /etc/hostname": (10.5371324s)
--- PASS: TestFunctional/parallel/SSHCmd (22.78s)

TestFunctional/parallel/CpCmd (59.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.5689869s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh -n functional-796600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh -n functional-796600 "sudo cat /home/docker/cp-test.txt": (11.3060169s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cp functional-796600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3775837609\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cp functional-796600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3775837609\001\cp-test.txt: (9.9346877s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh -n functional-796600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh -n functional-796600 "sudo cat /home/docker/cp-test.txt": (9.8143197s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.4445363s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh -n functional-796600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh -n functional-796600 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.9515543s)
--- PASS: TestFunctional/parallel/CpCmd (59.03s)

TestFunctional/parallel/MySQL (66.66s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-796600 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-t5gzv" [734ea1a4-faeb-4fbe-b52a-f6ec28062e05] Pending
helpers_test.go:344: "mysql-859648c796-t5gzv" [734ea1a4-faeb-4fbe-b52a-f6ec28062e05] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-t5gzv" [734ea1a4-faeb-4fbe-b52a-f6ec28062e05] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 52.0160319s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;": exit status 1 (322.2069ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;": exit status 1 (362.235ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;": exit status 1 (369.5315ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;": exit status 1 (365.8451ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;": exit status 1 (368.0854ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-796600 exec mysql-859648c796-t5gzv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (66.66s)

                                                
                                    
TestFunctional/parallel/FileSync (10.62s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/10728/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/test/nested/copy/10728/hosts"
functional_test.go:1930: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/test/nested/copy/10728/hosts": (10.6195038s)
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.62s)

TestFunctional/parallel/CertSync (63.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/10728.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/10728.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/10728.pem": (11.5240424s)
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/10728.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /usr/share/ca-certificates/10728.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /usr/share/ca-certificates/10728.pem": (10.2424198s)
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.2362997s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/107282.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/107282.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/107282.pem": (10.4925971s)
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/107282.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /usr/share/ca-certificates/107282.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /usr/share/ca-certificates/107282.pem": (10.5009287s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.577088s)
--- PASS: TestFunctional/parallel/CertSync (63.58s)

TestFunctional/parallel/NodeLabels (0.27s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-796600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.27s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.84s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 ssh "sudo systemctl is-active crio": exit status 1 (10.8401913s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W1226 22:16:25.951777   14652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.84s)

TestFunctional/parallel/License (2.95s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe license: (2.9395518s)
--- PASS: TestFunctional/parallel/License (2.95s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.9s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-796600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-796600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-796600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9656: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 10200: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-796600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.90s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-796600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-796600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fd466db6-86d6-4594-9c80-ffe02f1ee681] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fd466db6-86d6-4594-9c80-ffe02f1ee681] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 30.0201427s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.73s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-796600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9252: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-796600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-796600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-j56hf" [b73c4a90-dcea-4d3a-b1b3-5eaefab5f900] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-j56hf" [b73c4a90-dcea-4d3a-b1b3-5eaefab5f900] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.0111411s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.47s)

TestFunctional/parallel/ServiceCmd/List (14.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 service list: (14.4083894s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (9.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.9957407s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.57s)

TestFunctional/parallel/ProfileCmd/profile_list (9.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
E1226 22:14:59.335275   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (8.9869154s)
functional_test.go:1314: Took "8.9873034s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "325.9229ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (9.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 service list -o json: (14.3580043s)
functional_test.go:1493: Took "14.3584646s" to run "out/minikube-windows-amd64.exe -p functional-796600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (9.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.0521005s)
functional_test.go:1365: Took "9.0522283s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "312.4453ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.38s)

TestFunctional/parallel/DockerEnv/powershell (44.92s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-796600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-796600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-796600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-796600": (29.0589481s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-796600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-796600 docker-env | Invoke-Expression ; docker images": (15.8378442s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (44.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 update-context --alsologtostderr -v=2: (3.2952508s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 update-context --alsologtostderr -v=2: (2.7135174s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.72s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.67s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 update-context --alsologtostderr -v=2: (2.6652149s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.67s)

TestFunctional/parallel/Version/short (0.29s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.29s)

TestFunctional/parallel/Version/components (8.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 version -o=json --components: (8.2916853s)
--- PASS: TestFunctional/parallel/Version/components (8.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls --format short --alsologtostderr: (8.2032441s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-796600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-796600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-796600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-796600 image ls --format short --alsologtostderr:
W1226 22:18:38.636529    9208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1226 22:18:38.746014    9208 out.go:296] Setting OutFile to fd 812 ...
I1226 22:18:38.747002    9208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:38.747002    9208 out.go:309] Setting ErrFile to fd 1032...
I1226 22:18:38.747002    9208 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:38.765001    9208 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:38.765001    9208 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:38.766008    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:41.254759    9208 main.go:141] libmachine: [stdout =====>] : Running

I1226 22:18:41.254759    9208 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:41.271569    9208 ssh_runner.go:195] Run: systemctl --version
I1226 22:18:41.271569    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:43.753687    9208 main.go:141] libmachine: [stdout =====>] : Running

I1226 22:18:43.756593    9208 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:43.756690    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-796600 ).networkadapters[0]).ipaddresses[0]
I1226 22:18:46.490877    9208 main.go:141] libmachine: [stdout =====>] : 172.21.180.84

I1226 22:18:46.490949    9208 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:46.491201    9208 sshutil.go:53] new ssh client: &{IP:172.21.180.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-796600\id_rsa Username:docker}
I1226 22:18:46.607021    9208 ssh_runner.go:235] Completed: systemctl --version: (5.3354521s)
I1226 22:18:46.623000    9208 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls --format table --alsologtostderr: (7.9963142s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-796600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/minikube-local-cache-test | functional-796600 | 9e5be65916d45 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-796600 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-796600 image ls --format table --alsologtostderr:
W1226 22:18:46.831928   11916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1226 22:18:46.916940   11916 out.go:296] Setting OutFile to fd 860 ...
I1226 22:18:46.932880   11916 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:46.932954   11916 out.go:309] Setting ErrFile to fd 808...
I1226 22:18:46.932954   11916 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:46.957320   11916 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:46.958078   11916 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:46.958751   11916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:49.315840   11916 main.go:141] libmachine: [stdout =====>] : Running

I1226 22:18:49.315840   11916 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:49.328840   11916 ssh_runner.go:195] Run: systemctl --version
I1226 22:18:49.328840   11916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:51.689715   11916 main.go:141] libmachine: [stdout =====>] : Running

I1226 22:18:51.689715   11916 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:51.689715   11916 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-796600 ).networkadapters[0]).ipaddresses[0]
I1226 22:18:54.469328   11916 main.go:141] libmachine: [stdout =====>] : 172.21.180.84

I1226 22:18:54.469491   11916 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:54.469730   11916 sshutil.go:53] new ssh client: &{IP:172.21.180.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-796600\id_rsa Username:docker}
I1226 22:18:54.593611   11916 ssh_runner.go:235] Completed: systemctl --version: (5.264771s)
I1226 22:18:54.603604   11916 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.00s)

TestFunctional/parallel/ImageCommands/ImageListJson (8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls --format json --alsologtostderr: (7.9955137s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-796600 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-796600"],"size":"32900000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9e5be65916d4592a04982c0b805b469eb28b4700c2c1ab94bb29375748f13437","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-796600"],"size":"30"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-796600 image ls --format json --alsologtostderr:
W1226 22:18:46.783602    4392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1226 22:18:46.870927    4392 out.go:296] Setting OutFile to fd 1532 ...
I1226 22:18:46.885925    4392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:46.885925    4392 out.go:309] Setting ErrFile to fd 1248...
I1226 22:18:46.885925    4392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:46.901926    4392 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:46.902925    4392 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:46.902925    4392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:49.239483    4392 main.go:141] libmachine: [stdout =====>] : Running
I1226 22:18:49.239483    4392 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:49.257820    4392 ssh_runner.go:195] Run: systemctl --version
I1226 22:18:49.257889    4392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:51.611959    4392 main.go:141] libmachine: [stdout =====>] : Running
I1226 22:18:51.612120    4392 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:51.612120    4392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-796600 ).networkadapters[0]).ipaddresses[0]
I1226 22:18:54.421851    4392 main.go:141] libmachine: [stdout =====>] : 172.21.180.84
I1226 22:18:54.421936    4392 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:54.422020    4392 sshutil.go:53] new ssh client: &{IP:172.21.180.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-796600\id_rsa Username:docker}
I1226 22:18:54.542535    4392 ssh_runner.go:235] Completed: systemctl --version: (5.2847156s)
I1226 22:18:54.553531    4392 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.00s)
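As a side note outside the captured log: listings in the JSON format shown in the Stdout above can be post-processed with a few lines of Python. The sample below inlines a trimmed, hypothetical two-entry listing shaped like that output; it is a sketch, not part of the test suite.

```python
import json

# Trimmed, hypothetical sample shaped like `image ls --format json` output
# above (fields: id, repoDigests, repoTags, size).
sample = """[
  {"id": "e6f18168", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.9"], "size": "744000"},
  {"id": "83f6cc40", "repoDigests": [], "repoTags": ["registry.k8s.io/kube-proxy:v1.28.4"], "size": "73200000"}
]"""

images = json.loads(sample)
# Collect every tag so a check can assert a given image is present.
tags = {tag for img in images for tag in img["repoTags"]}
print("registry.k8s.io/pause:3.9" in tags)  # → True
```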

TestFunctional/parallel/ImageCommands/ImageListYaml (8.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls --format yaml --alsologtostderr: (8.1410624s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-796600 image ls --format yaml --alsologtostderr:
- id: 9e5be65916d4592a04982c0b805b469eb28b4700c2c1ab94bb29375748f13437
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-796600
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-796600
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-796600 image ls --format yaml --alsologtostderr:
W1226 22:18:38.638532    4524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1226 22:18:38.747002    4524 out.go:296] Setting OutFile to fd 1612 ...
I1226 22:18:38.765001    4524 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:38.765001    4524 out.go:309] Setting ErrFile to fd 1616...
I1226 22:18:38.765001    4524 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:38.786006    4524 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:38.786006    4524 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:38.787008    4524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:41.239329    4524 main.go:141] libmachine: [stdout =====>] : Running
I1226 22:18:41.239405    4524 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:41.253750    4524 ssh_runner.go:195] Run: systemctl --version
I1226 22:18:41.253750    4524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:43.691691    4524 main.go:141] libmachine: [stdout =====>] : Running
I1226 22:18:43.691919    4524 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:43.691989    4524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-796600 ).networkadapters[0]).ipaddresses[0]
I1226 22:18:46.428276    4524 main.go:141] libmachine: [stdout =====>] : 172.21.180.84
I1226 22:18:46.428390    4524 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:46.428589    4524 sshutil.go:53] new ssh client: &{IP:172.21.180.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-796600\id_rsa Username:docker}
I1226 22:18:46.547472    4524 ssh_runner.go:235] Completed: systemctl --version: (5.2937224s)
I1226 22:18:46.558046    4524 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.14s)

TestFunctional/parallel/ImageCommands/ImageBuild (28.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-796600 ssh pgrep buildkitd: exit status 1 (10.5302672s)
** stderr ** 
	W1226 22:18:38.639532    6952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image build -t localhost/my-image:functional-796600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image build -t localhost/my-image:functional-796600 testdata\build --alsologtostderr: (10.6826229s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-796600 image build -t localhost/my-image:functional-796600 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in ea6cde700276
Removing intermediate container ea6cde700276
---> 6c8652193d4b
Step 3/3 : ADD content.txt /
---> 80718dfd921e
Successfully built 80718dfd921e
Successfully tagged localhost/my-image:functional-796600
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-796600 image build -t localhost/my-image:functional-796600 testdata\build --alsologtostderr:
W1226 22:18:49.166181    4452 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1226 22:18:49.251387    4452 out.go:296] Setting OutFile to fd 1284 ...
I1226 22:18:49.277286    4452 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:49.277369    4452 out.go:309] Setting ErrFile to fd 1440...
I1226 22:18:49.277418    4452 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:18:49.295598    4452 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:49.312872    4452 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1226 22:18:49.313831    4452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:51.720657    4452 main.go:141] libmachine: [stdout =====>] : Running
I1226 22:18:51.720657    4452 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:51.737907    4452 ssh_runner.go:195] Run: systemctl --version
I1226 22:18:51.737907    4452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-796600 ).state
I1226 22:18:54.042694    4452 main.go:141] libmachine: [stdout =====>] : Running
I1226 22:18:54.042852    4452 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:54.042919    4452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-796600 ).networkadapters[0]).ipaddresses[0]
I1226 22:18:56.719700    4452 main.go:141] libmachine: [stdout =====>] : 172.21.180.84
I1226 22:18:56.719700    4452 main.go:141] libmachine: [stderr =====>] : 
I1226 22:18:56.719878    4452 sshutil.go:53] new ssh client: &{IP:172.21.180.84 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-796600\id_rsa Username:docker}
I1226 22:18:56.836470    4452 ssh_runner.go:235] Completed: systemctl --version: (5.0985641s)
I1226 22:18:56.836470    4452 build_images.go:151] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1855480826.tar
I1226 22:18:56.850756    4452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1226 22:18:56.883012    4452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1855480826.tar
I1226 22:18:56.891685    4452 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1855480826.tar: stat -c "%s %y" /var/lib/minikube/build/build.1855480826.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1855480826.tar': No such file or directory
I1226 22:18:56.891866    4452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1855480826.tar --> /var/lib/minikube/build/build.1855480826.tar (3072 bytes)
I1226 22:18:56.959851    4452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1855480826
I1226 22:18:56.989526    4452 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1855480826 -xf /var/lib/minikube/build/build.1855480826.tar
I1226 22:18:57.005429    4452 docker.go:346] Building image: /var/lib/minikube/build/build.1855480826
I1226 22:18:57.016572    4452 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-796600 /var/lib/minikube/build/build.1855480826
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I1226 22:18:59.611408    4452 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-796600 /var/lib/minikube/build/build.1855480826: (2.5942333s)
I1226 22:18:59.628195    4452 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1855480826
I1226 22:18:59.660325    4452 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1855480826.tar
I1226 22:18:59.677972    4452 build_images.go:207] Built localhost/my-image:functional-796600 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1855480826.tar
I1226 22:18:59.678057    4452 build_images.go:123] succeeded building to: functional-796600
I1226 22:18:59.678057    4452 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls: (7.5262413s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.74s)
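A note outside the log: when scripting against legacy-builder output like the Stdout above, the short image ID can be pulled from the `Successfully built` line. A minimal sketch, with the sample lines inlined from the log:

```python
import re

# Tail of the legacy `docker build` output captured in the Stdout above.
log = """Step 3/3 : ADD content.txt /
 ---> 80718dfd921e
Successfully built 80718dfd921e
Successfully tagged localhost/my-image:functional-796600"""

def built_image_id(build_log):
    """Return the 12-hex short ID from the `Successfully built` line, or None."""
    m = re.search(r"^Successfully built ([0-9a-f]{12})$", build_log, re.MULTILINE)
    return m.group(1) if m else None

print(built_image_id(log))  # → 80718dfd921e
```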

TestFunctional/parallel/ImageCommands/Setup (4.07s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.8309155s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-796600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.07s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image load --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image load --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr: (16.5063521s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls: (7.6745714s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image load --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image load --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr: (11.1483329s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls: (7.5108457s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.66s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.3339198s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-796600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image load --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image load --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr: (13.8939661s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls: (7.433936s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image save gcr.io/google-containers/addon-resizer:functional-796600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image save gcr.io/google-containers/addon-resizer:functional-796600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.2444649s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.24s)

TestFunctional/parallel/ImageCommands/ImageRemove (15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image rm gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image rm gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr: (7.5358881s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls: (7.4633606s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.00s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.8047642s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image ls: (7.4740101s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-796600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-796600 image save --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr
E1226 22:18:36.128773   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-796600 image save --daemon gcr.io/google-containers/addon-resizer:functional-796600 --alsologtostderr: (9.4089074s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-796600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.84s)

TestFunctional/delete_addon-resizer_images (0.52s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-796600
--- PASS: TestFunctional/delete_addon-resizer_images (0.52s)

TestFunctional/delete_my-image_image (0.2s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-796600
--- PASS: TestFunctional/delete_my-image_image (0.20s)

TestFunctional/delete_minikube_cached_images (0.2s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-796600
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

TestImageBuild/serial/Setup (199.88s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-722400 --driver=hyperv
E1226 22:23:36.123223   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:24:01.504280   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:01.518682   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:01.534264   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:01.564573   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:01.612977   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:01.706532   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:01.878020   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:02.206753   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:02.861362   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:04.144609   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:06.708904   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:11.836475   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:22.086903   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:24:42.577759   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-722400 --driver=hyperv: (3m19.8781041s)
--- PASS: TestImageBuild/serial/Setup (199.88s)
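An observation outside the log: the repeated cert_rotation retries above land at roughly doubling intervals, the usual exponential-backoff pattern. A small sketch of such a schedule (parameters are illustrative, not the actual client values):

```python
import itertools

def backoff_schedule(base=0.1, factor=2.0, cap=30.0):
    """Yield capped, doubling wait times in seconds — the retry spacing
    pattern visible in the timestamps above (illustrative parameters)."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

waits = list(itertools.islice(backoff_schedule(), 8))
print(waits)  # → [0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8]
```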

TestImageBuild/serial/NormalBuild (9.59s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-722400
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-722400: (9.5909652s)
--- PASS: TestImageBuild/serial/NormalBuild (9.59s)

TestImageBuild/serial/BuildWithBuildArg (9.37s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-722400
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-722400: (9.3650182s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.37s)

TestImageBuild/serial/BuildWithDockerIgnore (7.8s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-722400
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-722400: (7.79489s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.67s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-722400
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-722400: (7.6700495s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.67s)

TestIngressAddonLegacy/StartLegacyK8sCluster (244.92s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-684000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E1226 22:26:45.483763   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:28:36.115002   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:29:01.499444   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:29:29.329432   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-684000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (4m4.9205208s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (244.92s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (39.84s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons enable ingress --alsologtostderr -v=5: (39.8362605s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (39.84s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.78s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons enable ingress-dns --alsologtostderr -v=5: (14.7798355s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (14.78s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (92.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-684000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-684000 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-684000 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5d70b66a-7928-464d-a44f-d95e3a6823dc] Pending
helpers_test.go:344: "nginx" [5d70b66a-7928-464d-a44f-d95e3a6823dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5d70b66a-7928-464d-a44f-d95e3a6823dc] Running
E1226 22:31:39.348789   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 33.0074347s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.506815s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1226 22:31:40.056788    1660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-684000 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 ip: (2.5602732s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.21.181.123
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons disable ingress-dns --alsologtostderr -v=1: (23.1351738s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-684000 addons disable ingress --alsologtostderr -v=1: (21.8170933s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.35s)

TestJSONOutput/start/Command (205.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-321400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E1226 22:33:36.121703   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:34:01.500468   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:36:05.440416   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:05.456596   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:05.472202   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:05.504747   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:05.551591   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:05.646087   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:05.818519   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:06.147782   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:06.961839   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:08.255716   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:10.822815   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:15.944471   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:26.199638   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:36:46.683611   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-321400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m25.5146309s)
--- PASS: TestJSONOutput/start/Command (205.52s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.95s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-321400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-321400 --output=json --user=testUser: (7.9485536s)
--- PASS: TestJSONOutput/pause/Command (7.95s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.8s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-321400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-321400 --output=json --user=testUser: (7.8046861s)
--- PASS: TestJSONOutput/unpause/Command (7.80s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (29.25s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-321400 --output=json --user=testUser
E1226 22:37:27.658074   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-321400 --output=json --user=testUser: (29.2544357s)
--- PASS: TestJSONOutput/stop/Command (29.25s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.62s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-766000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-766000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (309.9821ms)

-- stdout --
	{"specversion":"1.0","id":"4d235067-3fa1-4054-8752-83125aac458f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-766000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"89db03b5-8556-418b-8419-f02540faa4e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"3925cd05-bf94-4e70-bd45-78c4f3eeda90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6eb8623f-8415-4848-b10f-95fa44b67e0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"ab5f161f-6460-4472-a432-51549a74dc70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"bdf8b465-5e1b-4cd4-bf50-4d70630acdb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7360040a-38a9-47c7-8651-aba366902ba3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W1226 22:37:54.664926   14764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-766000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-766000: (1.3057836s)
--- PASS: TestErrorJSONOutput (1.62s)

TestMainNoArgs (0.26s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

TestMinikubeProfile (507.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-964400 --driver=hyperv
E1226 22:38:36.115720   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:38:49.585002   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:39:01.493168   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:40:24.693621   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:41:05.430378   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-964400 --driver=hyperv: (3m20.0552295s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-964400 --driver=hyperv
E1226 22:41:33.438134   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:43:36.125824   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:44:01.500862   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-964400 --driver=hyperv: (3m15.9088391s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-964400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (15.0278837s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-964400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (15.0002957s)
helpers_test.go:175: Cleaning up "second-964400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-964400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-964400: (42.8681333s)
helpers_test.go:175: Cleaning up "first-964400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-964400
E1226 22:46:05.439941   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-964400: (37.5517134s)
--- PASS: TestMinikubeProfile (507.44s)

TestMountStart/serial/StartWithMountFirst (150.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-421200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E1226 22:48:19.357196   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:48:36.121196   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-421200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.6522465s)
--- PASS: TestMountStart/serial/StartWithMountFirst (150.67s)

TestMountStart/serial/VerifyMountFirst (9.68s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-421200 ssh -- ls /minikube-host
E1226 22:49:01.501757   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-421200 ssh -- ls /minikube-host: (9.675032s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.68s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (151.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-421200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E1226 22:51:05.430618   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-421200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m30.3080987s)
--- PASS: TestMountStart/serial/StartWithMountSecond (151.31s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.63s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-421200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-421200 ssh -- ls /minikube-host: (9.6259295s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.63s)

                                                
                                    
TestMountStart/serial/DeleteFirst (26.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-421200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-421200 --alsologtostderr -v=5: (26.7179111s)
--- PASS: TestMountStart/serial/DeleteFirst (26.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.65s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-421200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-421200 ssh -- ls /minikube-host: (9.646833s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.65s)

                                                
                                    
TestMountStart/serial/Stop (22.06s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-421200
E1226 22:52:28.811131   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-421200: (22.0592666s)
--- PASS: TestMountStart/serial/Stop (22.06s)

                                                
                                    
TestMountStart/serial/RestartStopped (112.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-421200
E1226 22:53:36.116347   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:54:01.497130   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-421200: (1m51.85471s)
--- PASS: TestMountStart/serial/RestartStopped (112.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.67s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-421200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-421200 ssh -- ls /minikube-host: (9.6680414s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.67s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (426.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-455300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E1226 22:56:05.430246   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 22:57:04.704938   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 22:58:36.121090   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 22:59:01.507803   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:01:05.432258   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-455300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m41.9442646s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 status --alsologtostderr: (24.1665937s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (426.11s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (10.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- rollout status deployment/busybox: (3.5398077s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- nslookup kubernetes.io: (1.9799086s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-flvvn -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-flvvn -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-bskhd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-455300 -- exec busybox-5bc68d56bd-flvvn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.10s)
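The jsonpath queries above pull the pod IPs (`{.items[*].status.podIP}`) and pod names (`{.items[*].metadata.name}`) out of the kubectl pod list. The same extraction can be sketched in Python against the JSON form of the list; the sample objects below are illustrative, not taken from this run:

```python
# Mirror the two jsonpath extractions used by the test, operating on a
# pod list already parsed from `kubectl get pods -o json`.

def pod_ips(pod_list):
    """Return the podIP of every item, skipping pods with no IP assigned yet."""
    return [item["status"]["podIP"]
            for item in pod_list["items"]
            if item.get("status", {}).get("podIP")]

def pod_names(pod_list):
    """Return the metadata.name of every item."""
    return [item["metadata"]["name"] for item in pod_list["items"]]

if __name__ == "__main__":
    # Illustrative pod list; the names mimic the busybox deployment in the log.
    sample = {"items": [
        {"metadata": {"name": "busybox-5bc68d56bd-aaaaa"},
         "status": {"podIP": "10.244.0.3"}},
        {"metadata": {"name": "busybox-5bc68d56bd-bbbbb"},
         "status": {"podIP": "10.244.1.2"}},
    ]}
    print(pod_ips(sample), pod_names(sample))
```

The test treats an empty podIP as "not ready yet" and retries, which is why the helper above filters out items without an IP.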

                                                
                                    
TestMultiNode/serial/AddNode (221.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-455300 -v 3 --alsologtostderr
E1226 23:03:36.124479   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:04:01.495271   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:04:59.361325   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:06:05.439634   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-455300 -v 3 --alsologtostderr: (3m5.8926635s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 status --alsologtostderr: (35.858186s)
--- PASS: TestMultiNode/serial/AddNode (221.75s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.22s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-455300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.22s)

                                                
                                    
TestMultiNode/serial/ProfileList (7.75s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.7452832s)
--- PASS: TestMultiNode/serial/ProfileList (7.75s)

                                                
                                    
TestMultiNode/serial/CopyFile (366.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 status --output json --alsologtostderr: (35.9891931s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp testdata\cp-test.txt multinode-455300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp testdata\cp-test.txt multinode-455300:/home/docker/cp-test.txt: (9.4747923s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt": (9.5017976s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300.txt: (9.6361885s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt": (9.5361119s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300:/home/docker/cp-test.txt multinode-455300-m02:/home/docker/cp-test_multinode-455300_multinode-455300-m02.txt
E1226 23:08:36.122988   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300:/home/docker/cp-test.txt multinode-455300-m02:/home/docker/cp-test_multinode-455300_multinode-455300-m02.txt: (16.4921736s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt": (9.5060797s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test_multinode-455300_multinode-455300-m02.txt"
E1226 23:09:01.504737   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test_multinode-455300_multinode-455300-m02.txt": (9.5791085s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300:/home/docker/cp-test.txt multinode-455300-m03:/home/docker/cp-test_multinode-455300_multinode-455300-m03.txt
E1226 23:09:08.812277   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300:/home/docker/cp-test.txt multinode-455300-m03:/home/docker/cp-test_multinode-455300_multinode-455300-m03.txt: (16.6868376s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test.txt": (9.6471563s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test_multinode-455300_multinode-455300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test_multinode-455300_multinode-455300-m03.txt": (9.6511692s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp testdata\cp-test.txt multinode-455300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp testdata\cp-test.txt multinode-455300-m02:/home/docker/cp-test.txt: (9.5703861s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt": (9.4847233s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300-m02.txt: (9.591193s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt": (9.6683642s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt multinode-455300:/home/docker/cp-test_multinode-455300-m02_multinode-455300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt multinode-455300:/home/docker/cp-test_multinode-455300-m02_multinode-455300.txt: (16.9354862s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt": (9.7077235s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test_multinode-455300-m02_multinode-455300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test_multinode-455300-m02_multinode-455300.txt": (9.5914093s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt multinode-455300-m03:/home/docker/cp-test_multinode-455300-m02_multinode-455300-m03.txt
E1226 23:11:05.438164   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m02:/home/docker/cp-test.txt multinode-455300-m03:/home/docker/cp-test_multinode-455300-m02_multinode-455300-m03.txt: (16.8073615s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test.txt": (9.6279951s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test_multinode-455300-m02_multinode-455300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test_multinode-455300-m02_multinode-455300-m03.txt": (9.5964464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp testdata\cp-test.txt multinode-455300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp testdata\cp-test.txt multinode-455300-m03:/home/docker/cp-test.txt: (9.6063631s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt": (9.5766141s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3257950645\001\cp-test_multinode-455300-m03.txt: (9.4951964s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt": (9.6431609s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt multinode-455300:/home/docker/cp-test_multinode-455300-m03_multinode-455300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt multinode-455300:/home/docker/cp-test_multinode-455300-m03_multinode-455300.txt: (16.7612381s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt": (9.5720235s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test_multinode-455300-m03_multinode-455300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300 "sudo cat /home/docker/cp-test_multinode-455300-m03_multinode-455300.txt": (9.56164s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt multinode-455300-m02:/home/docker/cp-test_multinode-455300-m03_multinode-455300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 cp multinode-455300-m03:/home/docker/cp-test.txt multinode-455300-m02:/home/docker/cp-test_multinode-455300-m03_multinode-455300-m02.txt: (16.5789079s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m03 "sudo cat /home/docker/cp-test.txt": (9.5878783s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test_multinode-455300-m03_multinode-455300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 ssh -n multinode-455300-m02 "sudo cat /home/docker/cp-test_multinode-455300-m03_multinode-455300-m02.txt": (9.5814916s)
--- PASS: TestMultiNode/serial/CopyFile (366.26s)

                                                
                                    
TestMultiNode/serial/StopNode (72.09s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 node stop m03
E1226 23:13:36.129454   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 node stop m03: (19.3403668s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 status
E1226 23:13:44.720173   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:14:01.498318   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-455300 status: exit status 7 (26.2547506s)

                                                
                                                
-- stdout --
	multinode-455300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-455300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-455300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:13:43.374922   11140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
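The long hex directory in the "Unable to resolve the current Docker CLI context" warning above is not random: the Docker CLI stores context metadata under `~/.docker/contexts/meta/<sha256(context name)>/meta.json`, and `37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f` is the SHA-256 digest of the string `default`. A minimal check (the helper name is illustrative):

```python
import hashlib

# The Docker CLI derives the context metadata directory name by hashing
# the context name with SHA-256.
def context_meta_dir(context_name: str) -> str:
    return hashlib.sha256(context_name.encode("utf-8")).hexdigest()

print(context_meta_dir("default"))
# 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

So the warning simply means no `default` context metadata file exists on this Jenkins worker; it is noise for these tests, not a failure.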
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-455300 status --alsologtostderr: exit status 7 (26.4898147s)

                                                
                                                
-- stdout --
	multinode-455300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-455300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-455300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1226 23:14:09.623572    2232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1226 23:14:09.712236    2232 out.go:296] Setting OutFile to fd 1800 ...
	I1226 23:14:09.712236    2232 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:14:09.712236    2232 out.go:309] Setting ErrFile to fd 1812...
	I1226 23:14:09.712236    2232 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 23:14:09.731940    2232 out.go:303] Setting JSON to false
	I1226 23:14:09.731940    2232 mustload.go:65] Loading cluster: multinode-455300
	I1226 23:14:09.731940    2232 notify.go:220] Checking for updates...
	I1226 23:14:09.732915    2232 config.go:182] Loaded profile config "multinode-455300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 23:14:09.732915    2232 status.go:255] checking status of multinode-455300 ...
	I1226 23:14:09.734592    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:14:11.931384    2232 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:14:11.931384    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:11.931384    2232 status.go:330] multinode-455300 host status = "Running" (err=<nil>)
	I1226 23:14:11.931574    2232 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:14:11.932391    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:14:14.118164    2232 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:14:14.118259    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:14.118259    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:14:16.740016    2232 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 23:14:16.740202    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:16.740202    2232 host.go:66] Checking if "multinode-455300" exists ...
	I1226 23:14:16.755754    2232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 23:14:16.755754    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300 ).state
	I1226 23:14:18.902368    2232 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:14:18.902368    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:18.902368    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300 ).networkadapters[0]).ipaddresses[0]
	I1226 23:14:21.581501    2232 main.go:141] libmachine: [stdout =====>] : 172.21.184.4
	
	I1226 23:14:21.581501    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:21.581501    2232 sshutil.go:53] new ssh client: &{IP:172.21.184.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300\id_rsa Username:docker}
	I1226 23:14:21.686105    2232 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9303509s)
	I1226 23:14:21.700463    2232 ssh_runner.go:195] Run: systemctl --version
	I1226 23:14:21.727564    2232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:14:21.761080    2232 kubeconfig.go:92] found "multinode-455300" server: "https://172.21.184.4:8443"
	I1226 23:14:21.761200    2232 api_server.go:166] Checking apiserver status ...
	I1226 23:14:21.774523    2232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 23:14:21.809859    2232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2058/cgroup
	I1226 23:14:21.828965    2232 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podf2597de8fcd5ba36e5afbfdfbed4b155/0d2ca397ea4bdb1ddc7047352e9fd7fa1bc5a85c9a41ee6070f71efa834fe3bc"
	I1226 23:14:21.843824    2232 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf2597de8fcd5ba36e5afbfdfbed4b155/0d2ca397ea4bdb1ddc7047352e9fd7fa1bc5a85c9a41ee6070f71efa834fe3bc/freezer.state
	I1226 23:14:21.864647    2232 api_server.go:204] freezer state: "THAWED"
	I1226 23:14:21.864712    2232 api_server.go:253] Checking apiserver healthz at https://172.21.184.4:8443/healthz ...
	I1226 23:14:21.874536    2232 api_server.go:279] https://172.21.184.4:8443/healthz returned 200:
	ok
	I1226 23:14:21.874536    2232 status.go:421] multinode-455300 apiserver status = Running (err=<nil>)
	I1226 23:14:21.874730    2232 status.go:257] multinode-455300 status: &{Name:multinode-455300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1226 23:14:21.874730    2232 status.go:255] checking status of multinode-455300-m02 ...
	I1226 23:14:21.875589    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:14:24.078861    2232 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:14:24.079117    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:24.079117    2232 status.go:330] multinode-455300-m02 host status = "Running" (err=<nil>)
	I1226 23:14:24.079198    2232 host.go:66] Checking if "multinode-455300-m02" exists ...
	I1226 23:14:24.079949    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:14:26.303132    2232 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:14:26.303379    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:26.303443    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:14:28.896123    2232 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:14:28.896282    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:28.896327    2232 host.go:66] Checking if "multinode-455300-m02" exists ...
	I1226 23:14:28.909671    2232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 23:14:28.909671    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m02 ).state
	I1226 23:14:31.073617    2232 main.go:141] libmachine: [stdout =====>] : Running
	
	I1226 23:14:31.073881    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:31.073881    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-455300-m02 ).networkadapters[0]).ipaddresses[0]
	I1226 23:14:33.670773    2232 main.go:141] libmachine: [stdout =====>] : 172.21.187.58
	
	I1226 23:14:33.671048    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:33.671455    2232 sshutil.go:53] new ssh client: &{IP:172.21.187.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-455300-m02\id_rsa Username:docker}
	I1226 23:14:33.791636    2232 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8811217s)
	I1226 23:14:33.805444    2232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 23:14:33.826650    2232 status.go:257] multinode-455300-m02 status: &{Name:multinode-455300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1226 23:14:33.826650    2232 status.go:255] checking status of multinode-455300-m03 ...
	I1226 23:14:33.827437    2232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-455300-m03 ).state
	I1226 23:14:35.947471    2232 main.go:141] libmachine: [stdout =====>] : Off
	
	I1226 23:14:35.947471    2232 main.go:141] libmachine: [stderr =====>] : 
	I1226 23:14:35.947594    2232 status.go:330] multinode-455300-m03 host status = "Stopped" (err=<nil>)
	I1226 23:14:35.947594    2232 status.go:343] host is not running, skipping remaining checks
	I1226 23:14:35.947594    2232 status.go:257] multinode-455300-m03 status: &{Name:multinode-455300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
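The apiserver check traced above works in three steps: `pgrep` the kube-apiserver pid, read the freezer line from `/proc/<pid>/cgroup`, and confirm the cgroup's `freezer.state` is `THAWED` before hitting `/healthz`. As a minimal stand-alone sketch of the cgroup-line parsing, run here against a sample line copied verbatim from the log above (the live `cat` step is shown only as a comment, since it needs a real node):

```shell
# Parse the freezer cgroup path out of a /proc/<pid>/cgroup line.
# Sample line copied from the log above; on a real node it would come
# from: sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
cgroup_line='6:freezer:/kubepods/burstable/podf2597de8fcd5ba36e5afbfdfbed4b155/0d2ca397ea4bdb1ddc7047352e9fd7fa1bc5a85c9a41ee6070f71efa834fe3bc'

# Strip the "<n>:freezer:" prefix, leaving just the cgroup path.
freezer_path=$(printf '%s\n' "$cgroup_line" | sed -n 's/^[0-9]*:freezer://p')
echo "$freezer_path"

# On a live node the follow-up check would be (hypothetical path shown):
#   cat /sys/fs/cgroup/freezer${freezer_path}/freezer.state   # expect THAWED
```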
--- PASS: TestMultiNode/serial/StopNode (72.09s)
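The `ssh_runner` entries in the log above repeatedly shell out to `df -h /var | awk 'NR==2{print $5}'` to sample disk usage on each node. A minimal stand-alone sketch of what that pipeline extracts (run locally, not over SSH):

```shell
# Print the "Use%" of the filesystem backing /var, e.g. "17%".
# NR==2 skips df's header row; $5 is the fifth column (Use%).
df -h /var | awk 'NR==2{print $5}'
```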

TestMultiNode/serial/StartAfterStop (173.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 node start m03 --alsologtostderr
E1226 23:16:05.437891   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 node start m03 --alsologtostderr: (2m17.2036021s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-455300 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-455300 status: (36.2023836s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (173.63s)

TestPreload (462.63s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-645900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E1226 23:28:36.122032   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:29:01.502315   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:30:24.724478   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
E1226 23:31:05.428470   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-645900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m47.7591033s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-645900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-645900 image pull gcr.io/k8s-minikube/busybox: (8.5025726s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-645900
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-645900: (35.0849071s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-645900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E1226 23:33:36.123624   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:34:01.492959   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-645900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m26.8400238s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-645900 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-645900 image list: (7.3455439s)
helpers_test.go:175: Cleaning up "test-preload-645900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-645900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-645900: (37.0923815s)
--- PASS: TestPreload (462.63s)

TestScheduledStopWindows (330.89s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-663500 --memory=2048 --driver=hyperv
E1226 23:36:05.428113   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
E1226 23:38:19.376395   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:38:36.117038   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
E1226 23:39:01.504467   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-796600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-663500 --memory=2048 --driver=hyperv: (3m17.7397171s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-663500 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-663500 --schedule 5m: (10.8351672s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-663500 -n scheduled-stop-663500
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-663500 -n scheduled-stop-663500: exit status 1 (10.0335478s)

** stderr ** 
	W1226 23:39:18.407095   15124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-663500 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-663500 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.7456757s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-663500 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-663500 --schedule 5s: (10.6122415s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-663500
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-663500: exit status 7 (2.4316272s)

-- stdout --
	scheduled-stop-663500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W1226 23:40:48.806755    3444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-663500 -n scheduled-stop-663500
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-663500 -n scheduled-stop-663500: exit status 7 (2.4339668s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1226 23:40:51.240533    3396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-663500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-663500
E1226 23:41:05.435034   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-663500: (27.0340736s)
--- PASS: TestScheduledStopWindows (330.89s)

TestKubernetesUpgrade (1107.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (4m56.7347619s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-183800
version_upgrade_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-183800: (29.1729002s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-183800 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-183800 status --format={{.Host}}: exit status 7 (2.7426298s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1226 23:55:48.935195    6912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (4m32.9116292s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-183800 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (354.4011ms)

-- stdout --
	* [kubernetes-upgrade-183800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W1227 00:00:24.847380    1148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-183800
	    minikube start -p kubernetes-upgrade-183800 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1838002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-183800 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
E1227 00:01:05.427745   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-684000\client.crt: The system cannot find the path specified.
version_upgrade_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-183800 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (7m43.7932297s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-183800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-183800
E1227 00:08:36.119826   10728 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-839600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-183800: (41.3195118s)
--- PASS: TestKubernetesUpgrade (1107.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-152600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-152600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (432.402ms)

-- stdout --
	* [NoKubernetes-152600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W1226 23:41:20.733557   10832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

TestStoppedBinaryUpgrade/Setup (0.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

TestPause/serial/Start (341.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-178300 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-178300 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (5m41.7340523s)
--- PASS: TestPause/serial/Start (341.73s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-682800
version_upgrade_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-682800: (10.431817s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.43s)

Test skip (32/202)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-796600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-796600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8132: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.05s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-796600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-796600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0461805s)

-- stdout --
	* [functional-796600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W1226 22:15:13.262063   10844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1226 22:15:13.362109   10844 out.go:296] Setting OutFile to fd 1188 ...
	I1226 22:15:13.363108   10844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:15:13.363108   10844 out.go:309] Setting ErrFile to fd 1032...
	I1226 22:15:13.363108   10844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:15:13.401615   10844 out.go:303] Setting JSON to false
	I1226 22:15:13.407644   10844 start.go:128] hostinfo: {"hostname":"minikube1","uptime":3312,"bootTime":1703625601,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 22:15:13.408626   10844 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 22:15:13.412603   10844 out.go:177] * [functional-796600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 22:15:13.416623   10844 notify.go:220] Checking for updates...
	I1226 22:15:13.418653   10844 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:15:13.421604   10844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:15:13.424647   10844 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 22:15:13.427605   10844 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:15:13.430608   10844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:15:13.435605   10844 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 22:15:13.436612   10844 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.06s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-796600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-796600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0589401s)

-- stdout --
	* [functional-796600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W1226 22:15:08.194205   10628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1226 22:15:08.284916   10628 out.go:296] Setting OutFile to fd 1032 ...
	I1226 22:15:08.285880   10628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:15:08.285880   10628 out.go:309] Setting ErrFile to fd 1272...
	I1226 22:15:08.285880   10628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:15:08.314704   10628 out.go:303] Setting JSON to false
	I1226 22:15:08.319946   10628 start.go:128] hostinfo: {"hostname":"minikube1","uptime":3307,"bootTime":1703625601,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3803 Build 19045.3803","kernelVersion":"10.0.19045.3803 Build 19045.3803","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1226 22:15:08.319946   10628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1226 22:15:08.323571   10628 out.go:177] * [functional-796600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3803 Build 19045.3803
	I1226 22:15:08.327694   10628 notify.go:220] Checking for updates...
	I1226 22:15:08.331794   10628 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1226 22:15:08.334791   10628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:15:08.337812   10628 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1226 22:15:08.340814   10628 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:15:08.343846   10628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:15:08.347795   10628 config.go:182] Loaded profile config "functional-796600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1226 22:15:08.348794   10628 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.06s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
